2025-05-29 00:00:08.434065 | Job console starting
2025-05-29 00:00:08.446298 | Updating git repos
2025-05-29 00:00:08.491432 | Cloning repos into workspace
2025-05-29 00:00:08.692801 | Restoring repo states
2025-05-29 00:00:08.761999 | Merging changes
2025-05-29 00:00:08.762029 | Checking out repos
2025-05-29 00:00:09.117568 | Preparing playbooks
2025-05-29 00:00:10.050911 | Running Ansible setup
2025-05-29 00:00:16.117985 | PRE-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/pre.yaml@main]
2025-05-29 00:00:17.052655 |
2025-05-29 00:00:17.053160 | PLAY [Base pre]
2025-05-29 00:00:17.069595 |
2025-05-29 00:00:17.069729 | TASK [Setup log path fact]
2025-05-29 00:00:17.111709 | orchestrator | ok
2025-05-29 00:00:17.146777 |
2025-05-29 00:00:17.146966 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-29 00:00:17.206377 | orchestrator | ok
2025-05-29 00:00:17.230215 |
2025-05-29 00:00:17.230372 | TASK [emit-job-header : Print job information]
2025-05-29 00:00:17.309655 | # Job Information
2025-05-29 00:00:17.309838 | Ansible Version: 2.16.14
2025-05-29 00:00:17.309873 | Job: testbed-deploy-stable-in-a-nutshell-ubuntu-24.04
2025-05-29 00:00:17.309906 | Pipeline: periodic-midnight
2025-05-29 00:00:17.309929 | Executor: 521e9411259a
2025-05-29 00:00:17.309950 | Triggered by: https://github.com/osism/testbed
2025-05-29 00:00:17.309971 | Event ID: f90b0700a5554866b75c6aea8e143b21
2025-05-29 00:00:17.328551 |
2025-05-29 00:00:17.328698 | LOOP [emit-job-header : Print node information]
2025-05-29 00:00:17.487681 | orchestrator | ok:
2025-05-29 00:00:17.487865 | orchestrator | # Node Information
2025-05-29 00:00:17.487898 | orchestrator | Inventory Hostname: orchestrator
2025-05-29 00:00:17.487923 | orchestrator | Hostname: zuul-static-regiocloud-infra-1
2025-05-29 00:00:17.487945 | orchestrator | Username: zuul-testbed01
2025-05-29 00:00:17.487966 | orchestrator | Distro: Debian 12.11
2025-05-29 00:00:17.487990 | orchestrator | Provider: static-testbed
2025-05-29 00:00:17.488012 | orchestrator | Region:
2025-05-29 00:00:17.488032 | orchestrator | Label: testbed-orchestrator
2025-05-29 00:00:17.488053 | orchestrator | Product Name: OpenStack Nova
2025-05-29 00:00:17.488073 | orchestrator | Interface IP: 81.163.193.140
2025-05-29 00:00:17.511475 |
2025-05-29 00:00:17.511609 | TASK [log-inventory : Ensure Zuul Ansible directory exists]
2025-05-29 00:00:18.173835 | orchestrator -> localhost | changed
2025-05-29 00:00:18.184831 |
2025-05-29 00:00:18.184957 | TASK [log-inventory : Copy ansible inventory to logs dir]
2025-05-29 00:00:19.366928 | orchestrator -> localhost | changed
2025-05-29 00:00:19.384447 |
2025-05-29 00:00:19.384684 | TASK [add-build-sshkey : Check to see if ssh key was already created for this build]
2025-05-29 00:00:19.714282 | orchestrator -> localhost | ok
2025-05-29 00:00:19.722385 |
2025-05-29 00:00:19.722547 | TASK [add-build-sshkey : Create a new key in workspace based on build UUID]
2025-05-29 00:00:19.763995 | orchestrator | ok
2025-05-29 00:00:19.797664 | orchestrator | included: /var/lib/zuul/builds/62ec09aac32a49de9a819e3bd5eb4892/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/create-key-and-replace.yaml
2025-05-29 00:00:19.807984 |
2025-05-29 00:00:19.808284 | TASK [add-build-sshkey : Create Temp SSH key]
2025-05-29 00:00:21.625989 | orchestrator -> localhost | Generating public/private rsa key pair.
2025-05-29 00:00:21.626309 | orchestrator -> localhost | Your identification has been saved in /var/lib/zuul/builds/62ec09aac32a49de9a819e3bd5eb4892/work/62ec09aac32a49de9a819e3bd5eb4892_id_rsa
2025-05-29 00:00:21.626362 | orchestrator -> localhost | Your public key has been saved in /var/lib/zuul/builds/62ec09aac32a49de9a819e3bd5eb4892/work/62ec09aac32a49de9a819e3bd5eb4892_id_rsa.pub
2025-05-29 00:00:21.626389 | orchestrator -> localhost | The key fingerprint is:
2025-05-29 00:00:21.626413 | orchestrator -> localhost | SHA256:yRpaObDLcYHvR0VNOCtLLgurNJOWAfWQhTotVeGM1QE zuul-build-sshkey
2025-05-29 00:00:21.626436 | orchestrator -> localhost | The key's randomart image is:
2025-05-29 00:00:21.626474 | orchestrator -> localhost | +---[RSA 3072]----+
2025-05-29 00:00:21.626496 | orchestrator -> localhost | | o=E+.. .+. |
2025-05-29 00:00:21.626519 | orchestrator -> localhost | | .+B .. .o . |
2025-05-29 00:00:21.626573 | orchestrator -> localhost | |.+. * . .o |
2025-05-29 00:00:21.626599 | orchestrator -> localhost | |+.. + +oo. |
2025-05-29 00:00:21.626620 | orchestrator -> localhost | | o. o BoSo |
2025-05-29 00:00:21.626650 | orchestrator -> localhost | | =.B.=o |
2025-05-29 00:00:21.626671 | orchestrator -> localhost | | B +ooo. |
2025-05-29 00:00:21.626690 | orchestrator -> localhost | | o o. .. |
2025-05-29 00:00:21.626711 | orchestrator -> localhost | | .. |
2025-05-29 00:00:21.626731 | orchestrator -> localhost | +----[SHA256]-----+
2025-05-29 00:00:21.626793 | orchestrator -> localhost | ok: Runtime: 0:00:01.063429
2025-05-29 00:00:21.637109 |
2025-05-29 00:00:21.637257 | TASK [add-build-sshkey : Remote setup ssh keys (linux)]
2025-05-29 00:00:21.685010 | orchestrator | ok
2025-05-29 00:00:21.699477 | orchestrator | included: /var/lib/zuul/builds/62ec09aac32a49de9a819e3bd5eb4892/trusted/project_1/github.com/osism/openinfra-zuul-jobs/roles/add-build-sshkey/tasks/remote-linux.yaml
2025-05-29 00:00:21.712019 |
2025-05-29 00:00:21.712161 | TASK [add-build-sshkey : Remove previously added zuul-build-sshkey]
2025-05-29 00:00:21.737374 | orchestrator | skipping: Conditional result was False
2025-05-29 00:00:21.750071 |
2025-05-29 00:00:21.750625 | TASK [add-build-sshkey : Enable access via build key on all nodes]
2025-05-29 00:00:22.394781 | orchestrator | changed
2025-05-29 00:00:22.407680 |
2025-05-29 00:00:22.408612 | TASK [add-build-sshkey : Make sure user has a .ssh]
2025-05-29 00:00:22.686522 | orchestrator | ok
2025-05-29 00:00:22.693604 |
2025-05-29 00:00:22.693718 | TASK [add-build-sshkey : Install build private key as SSH key on all nodes]
2025-05-29 00:00:23.103677 | orchestrator | ok
2025-05-29 00:00:23.109762 |
2025-05-29 00:00:23.109868 | TASK [add-build-sshkey : Install build public key as SSH key on all nodes]
2025-05-29 00:00:23.488749 | orchestrator | ok
2025-05-29 00:00:23.497631 |
2025-05-29 00:00:23.497760 | TASK [add-build-sshkey : Remote setup ssh keys (windows)]
2025-05-29 00:00:23.511261 | orchestrator | skipping: Conditional result was False
2025-05-29 00:00:23.518026 |
2025-05-29 00:00:23.518122 | TASK [remove-zuul-sshkey : Remove master key from local agent]
2025-05-29 00:00:23.898036 | orchestrator -> localhost | changed
2025-05-29 00:00:23.911502 |
2025-05-29 00:00:23.911602 | TASK [add-build-sshkey : Add back temp key]
2025-05-29 00:00:24.220654 | orchestrator -> localhost | Identity added: /var/lib/zuul/builds/62ec09aac32a49de9a819e3bd5eb4892/work/62ec09aac32a49de9a819e3bd5eb4892_id_rsa (zuul-build-sshkey)
2025-05-29 00:00:24.221099 | orchestrator -> localhost | ok: Runtime: 0:00:00.020415
2025-05-29 00:00:24.233417 |
2025-05-29 00:00:24.233566 | TASK [add-build-sshkey : Verify we can still SSH to all nodes]
2025-05-29 00:00:24.609743 | orchestrator | ok
2025-05-29 00:00:24.622028 |
2025-05-29 00:00:24.622267 | TASK [add-build-sshkey : Verify we can still SSH to all nodes (windows)]
2025-05-29 00:00:24.657721 | orchestrator | skipping: Conditional result was False
2025-05-29 00:00:24.727269 |
2025-05-29 00:00:24.727405 | TASK [start-zuul-console : Start zuul_console daemon.]
2025-05-29 00:00:25.132890 | orchestrator | ok
2025-05-29 00:00:25.169250 |
2025-05-29 00:00:25.169423 | TASK [validate-host : Define zuul_info_dir fact]
2025-05-29 00:00:25.223027 | orchestrator | ok
2025-05-29 00:00:25.241705 |
2025-05-29 00:00:25.241825 | TASK [validate-host : Ensure Zuul Ansible directory exists]
2025-05-29 00:00:25.875454 | orchestrator -> localhost | ok
2025-05-29 00:00:25.893278 |
2025-05-29 00:00:25.893412 | TASK [validate-host : Collect information about the host]
2025-05-29 00:00:27.249329 | orchestrator | ok
2025-05-29 00:00:27.268188 |
2025-05-29 00:00:27.268332 | TASK [validate-host : Sanitize hostname]
2025-05-29 00:00:27.376912 | orchestrator | ok
2025-05-29 00:00:27.382390 |
2025-05-29 00:00:27.382491 | TASK [validate-host : Write out all ansible variables/facts known for each host]
2025-05-29 00:00:28.469119 | orchestrator -> localhost | changed
2025-05-29 00:00:28.475474 |
2025-05-29 00:00:28.475604 | TASK [validate-host : Collect information about zuul worker]
2025-05-29 00:00:28.928865 | orchestrator | ok
2025-05-29 00:00:28.942723 |
2025-05-29 00:00:28.943335 | TASK [validate-host : Write out all zuul information for each host]
2025-05-29 00:00:29.771232 | orchestrator -> localhost | changed
2025-05-29 00:00:29.788507 |
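The "Create Temp SSH key" task above boils down to an unattended ssh-keygen call producing the 3072-bit RSA pair and fingerprint shown in the log. A minimal sketch of that step, with an illustrative /tmp directory in place of the real per-build work dir:

```shell
# Sketch of what add-build-sshkey's "Create Temp SSH key" step does:
# generate an unattended 3072-bit RSA keypair with the zuul-build-sshkey
# comment. The directory below is illustrative, not the real build path.
demo=/tmp/zuul-build-sshkey-demo
rm -rf "$demo" && mkdir -p "$demo"
ssh-keygen -t rsa -b 3072 -N "" -C zuul-build-sshkey -f "$demo/build_id_rsa"
# Print the SHA256 fingerprint, as seen in the task output above.
ssh-keygen -l -f "$demo/build_id_rsa.pub"
```

The later "Add back temp key" task then loads this private key into the local ssh-agent (the `Identity added:` line in the log).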
2025-05-29 00:00:29.788628 | TASK [prepare-workspace-log : Start zuul_console daemon.]
2025-05-29 00:00:30.063695 | orchestrator | ok
2025-05-29 00:00:30.073133 |
2025-05-29 00:00:30.073260 | TASK [prepare-workspace-log : Synchronize src repos to workspace directory.]
2025-05-29 00:00:48.398289 | orchestrator | changed:
2025-05-29 00:00:48.398581 | orchestrator | .d..t...... src/
2025-05-29 00:00:48.398616 | orchestrator | .d..t...... src/github.com/
2025-05-29 00:00:48.398641 | orchestrator | .d..t...... src/github.com/osism/
2025-05-29 00:00:48.398663 | orchestrator | .d..t...... src/github.com/osism/ansible-collection-commons/
2025-05-29 00:00:48.398686 | orchestrator | RedHat.yml
2025-05-29 00:00:48.409423 | orchestrator | .L..t...... src/github.com/osism/ansible-collection-commons/roles/repository/tasks/CentOS.yml -> RedHat.yml
2025-05-29 00:00:48.409441 | orchestrator | RedHat.yml
2025-05-29 00:00:48.409493 | orchestrator | = 1.53.0"...
2025-05-29 00:01:02.783291 | orchestrator | 00:01:02.783 STDOUT terraform: - Finding hashicorp/local versions matching ">= 2.2.0"...
2025-05-29 00:01:04.067279 | orchestrator | 00:01:04.067 STDOUT terraform: - Installing hashicorp/null v3.2.4...
2025-05-29 00:01:05.329793 | orchestrator | 00:01:05.329 STDOUT terraform: - Installed hashicorp/null v3.2.4 (signed, key ID 0C0AF313E5FD9F80)
2025-05-29 00:01:06.595363 | orchestrator | 00:01:06.595 STDOUT terraform: - Installing terraform-provider-openstack/openstack v3.1.0...
2025-05-29 00:01:07.770628 | orchestrator | 00:01:07.770 STDOUT terraform: - Installed terraform-provider-openstack/openstack v3.1.0 (signed, key ID 4F80527A391BEFD2)
2025-05-29 00:01:09.237342 | orchestrator | 00:01:09.237 STDOUT terraform: - Installing hashicorp/local v2.5.3...
2025-05-29 00:01:10.528974 | orchestrator | 00:01:10.528 STDOUT terraform: - Installed hashicorp/local v2.5.3 (signed, key ID 0C0AF313E5FD9F80)
2025-05-29 00:01:10.529055 | orchestrator | 00:01:10.528 STDOUT terraform: Providers are signed by their developers.
2025-05-29 00:01:10.529077 | orchestrator | 00:01:10.528 STDOUT terraform: If you'd like to know more about provider signing, you can read about it here:
2025-05-29 00:01:10.529112 | orchestrator | 00:01:10.529 STDOUT terraform: https://opentofu.org/docs/cli/plugins/signing/
2025-05-29 00:01:10.529213 | orchestrator | 00:01:10.529 STDOUT terraform: OpenTofu has created a lock file .terraform.lock.hcl to record the provider
2025-05-29 00:01:10.529273 | orchestrator | 00:01:10.529 STDOUT terraform: selections it made above. Include this file in your version control repository
2025-05-29 00:01:10.529317 | orchestrator | 00:01:10.529 STDOUT terraform: so that OpenTofu can guarantee to make the same selections by default when
2025-05-29 00:01:10.529340 | orchestrator | 00:01:10.529 STDOUT terraform: you run "tofu init" in the future.
2025-05-29 00:01:10.529891 | orchestrator | 00:01:10.529 STDOUT terraform: OpenTofu has been successfully initialized!
2025-05-29 00:01:10.529988 | orchestrator | 00:01:10.529 STDOUT terraform: You may now begin working with OpenTofu. Try running "tofu plan" to see
2025-05-29 00:01:10.530108 | orchestrator | 00:01:10.529 STDOUT terraform: any changes that are required for your infrastructure. All OpenTofu commands
2025-05-29 00:01:10.530118 | orchestrator | 00:01:10.530 STDOUT terraform: should now work.
2025-05-29 00:01:10.530163 | orchestrator | 00:01:10.530 STDOUT terraform: If you ever set or change modules or backend configuration for OpenTofu,
2025-05-29 00:01:10.530217 | orchestrator | 00:01:10.530 STDOUT terraform: rerun this command to reinitialize your working directory. If you forget, other
2025-05-29 00:01:10.530263 | orchestrator | 00:01:10.530 STDOUT terraform: commands will detect it and remind you to do so if necessary.
2025-05-29 00:01:10.709452 | orchestrator | 00:01:10.708 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-05-29 00:01:10.931881 | orchestrator | 00:01:10.931 STDOUT terraform: Created and switched to workspace "ci"!
2025-05-29 00:01:10.931980 | orchestrator | 00:01:10.931 STDOUT terraform: You're now on a new, empty workspace. Workspaces isolate their state,
2025-05-29 00:01:10.932145 | orchestrator | 00:01:10.931 STDOUT terraform: so if you run "tofu plan" OpenTofu will not see any existing state
2025-05-29 00:01:10.932198 | orchestrator | 00:01:10.932 STDOUT terraform: for this configuration.
2025-05-29 00:01:11.163301 | orchestrator | 00:01:11.163 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-05-29 00:01:11.290157 | orchestrator | 00:01:11.286 STDOUT terraform: ci.auto.tfvars
2025-05-29 00:01:11.294249 | orchestrator | 00:01:11.294 STDOUT terraform: default_custom.tf
2025-05-29 00:01:11.507370 | orchestrator | 00:01:11.507 WARN  The `TERRAGRUNT_TFPATH` environment variable is deprecated and will be removed in a future version of Terragrunt. Use `TG_TF_PATH=/home/zuul-testbed01/terraform` instead.
2025-05-29 00:01:12.561916 | orchestrator | 00:01:12.561 STDOUT terraform: data.openstack_networking_network_v2.public: Reading...
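The Terragrunt deprecation warning repeats before each invocation and has a one-line fix. A minimal sketch, assuming the job exports the deprecated variable in its environment; the binary path is the one the warning itself suggests:

```shell
# Replace the deprecated TERRAGRUNT_TFPATH variable with its successor.
# TG_TF_PATH tells Terragrunt which tofu/terraform binary to invoke.
unset TERRAGRUNT_TFPATH
export TG_TF_PATH=/home/zuul-testbed01/terraform
echo "TG_TF_PATH=$TG_TF_PATH"
```

With the new variable set, subsequent Terragrunt runs should use the same binary without emitting the WARN lines.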
2025-05-29 00:01:13.123592 | orchestrator | 00:01:13.123 STDOUT terraform: data.openstack_networking_network_v2.public: Read complete after 0s [id=e6be7364-bfd8-4de7-8120-8f41c69a139a]
2025-05-29 00:01:13.319795 | orchestrator | 00:01:13.319 STDOUT terraform: OpenTofu used the selected providers to generate the following execution
2025-05-29 00:01:13.319884 | orchestrator | 00:01:13.319 STDOUT terraform: plan. Resource actions are indicated with the following symbols:
2025-05-29 00:01:13.319893 | orchestrator | 00:01:13.319 STDOUT terraform:   + create
2025-05-29 00:01:13.319901 | orchestrator | 00:01:13.319 STDOUT terraform:  <= read (data resources)
2025-05-29 00:01:13.319928 | orchestrator | 00:01:13.319 STDOUT terraform: OpenTofu will perform the following actions:
2025-05-29 00:01:13.320042 | orchestrator | 00:01:13.319 STDOUT terraform:   # data.openstack_images_image_v2.image will be read during apply
2025-05-29 00:01:13.320128 | orchestrator | 00:01:13.320 STDOUT terraform:   # (config refers to values not yet known)
2025-05-29 00:01:13.320187 | orchestrator | 00:01:13.320 STDOUT terraform:  <= data "openstack_images_image_v2" "image" {
2025-05-29 00:01:13.320242 | orchestrator | 00:01:13.320 STDOUT terraform:   + checksum = (known after apply)
2025-05-29 00:01:13.320295 | orchestrator | 00:01:13.320 STDOUT terraform:   + created_at = (known after apply)
2025-05-29 00:01:13.320350 | orchestrator | 00:01:13.320 STDOUT terraform:   + file = (known after apply)
2025-05-29 00:01:13.320403 | orchestrator | 00:01:13.320 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.320460 | orchestrator | 00:01:13.320 STDOUT terraform:   + metadata = (known after apply)
2025-05-29 00:01:13.320514 | orchestrator | 00:01:13.320 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-29 00:01:13.320568 | orchestrator | 00:01:13.320 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-29 00:01:13.320605 | orchestrator | 00:01:13.320 STDOUT terraform:   + most_recent = true
2025-05-29 00:01:13.320682 | orchestrator | 00:01:13.320 STDOUT terraform:   + name = (known after apply)
2025-05-29 00:01:13.320739 | orchestrator | 00:01:13.320 STDOUT terraform:   + protected = (known after apply)
2025-05-29 00:01:13.320793 | orchestrator | 00:01:13.320 STDOUT terraform:   + region = (known after apply)
2025-05-29 00:01:13.320847 | orchestrator | 00:01:13.320 STDOUT terraform:   + schema = (known after apply)
2025-05-29 00:01:13.320900 | orchestrator | 00:01:13.320 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-29 00:01:13.320956 | orchestrator | 00:01:13.320 STDOUT terraform:   + tags = (known after apply)
2025-05-29 00:01:13.321009 | orchestrator | 00:01:13.320 STDOUT terraform:   + updated_at = (known after apply)
2025-05-29 00:01:13.321035 | orchestrator | 00:01:13.321 STDOUT terraform:  }
2025-05-29 00:01:13.321144 | orchestrator | 00:01:13.321 STDOUT terraform:   # data.openstack_images_image_v2.image_node will be read during apply
2025-05-29 00:01:13.321197 | orchestrator | 00:01:13.321 STDOUT terraform:   # (config refers to values not yet known)
2025-05-29 00:01:13.321265 | orchestrator | 00:01:13.321 STDOUT terraform:  <= data "openstack_images_image_v2" "image_node" {
2025-05-29 00:01:13.321317 | orchestrator | 00:01:13.321 STDOUT terraform:   + checksum = (known after apply)
2025-05-29 00:01:13.321372 | orchestrator | 00:01:13.321 STDOUT terraform:   + created_at = (known after apply)
2025-05-29 00:01:13.321427 | orchestrator | 00:01:13.321 STDOUT terraform:   + file = (known after apply)
2025-05-29 00:01:13.321483 | orchestrator | 00:01:13.321 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.321535 | orchestrator | 00:01:13.321 STDOUT terraform:   + metadata = (known after apply)
2025-05-29 00:01:13.321588 | orchestrator | 00:01:13.321 STDOUT terraform:   + min_disk_gb = (known after apply)
2025-05-29 00:01:13.321645 | orchestrator | 00:01:13.321 STDOUT terraform:   + min_ram_mb = (known after apply)
2025-05-29 00:01:13.321678 | orchestrator | 00:01:13.321 STDOUT terraform:   + most_recent = true
2025-05-29 00:01:13.321734 | orchestrator | 00:01:13.321 STDOUT terraform:   + name = (known after apply)
2025-05-29 00:01:13.321787 | orchestrator | 00:01:13.321 STDOUT terraform:   + protected = (known after apply)
2025-05-29 00:01:13.321841 | orchestrator | 00:01:13.321 STDOUT terraform:   + region = (known after apply)
2025-05-29 00:01:13.321896 | orchestrator | 00:01:13.321 STDOUT terraform:   + schema = (known after apply)
2025-05-29 00:01:13.321954 | orchestrator | 00:01:13.321 STDOUT terraform:   + size_bytes = (known after apply)
2025-05-29 00:01:13.322002 | orchestrator | 00:01:13.321 STDOUT terraform:   + tags = (known after apply)
2025-05-29 00:01:13.322093 | orchestrator | 00:01:13.321 STDOUT terraform:   + updated_at = (known after apply)
2025-05-29 00:01:13.322131 | orchestrator | 00:01:13.322 STDOUT terraform:  }
2025-05-29 00:01:13.322231 | orchestrator | 00:01:13.322 STDOUT terraform:   # local_file.MANAGER_ADDRESS will be created
2025-05-29 00:01:13.322287 | orchestrator | 00:01:13.322 STDOUT terraform:   + resource "local_file" "MANAGER_ADDRESS" {
2025-05-29 00:01:13.322354 | orchestrator | 00:01:13.322 STDOUT terraform:   + content = (known after apply)
2025-05-29 00:01:13.322419 | orchestrator | 00:01:13.322 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-29 00:01:13.322485 | orchestrator | 00:01:13.322 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-29 00:01:13.322551 | orchestrator | 00:01:13.322 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-29 00:01:13.322617 | orchestrator | 00:01:13.322 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-29 00:01:13.322685 | orchestrator | 00:01:13.322 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-29 00:01:13.322754 | orchestrator | 00:01:13.322 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-29 00:01:13.322802 | orchestrator | 00:01:13.322 STDOUT terraform:   + directory_permission = "0777"
2025-05-29 00:01:13.322850 | orchestrator | 00:01:13.322 STDOUT terraform:   + file_permission = "0644"
2025-05-29 00:01:13.322918 | orchestrator | 00:01:13.322 STDOUT terraform:   + filename = ".MANAGER_ADDRESS.ci"
2025-05-29 00:01:13.322987 | orchestrator | 00:01:13.322 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.323013 | orchestrator | 00:01:13.322 STDOUT terraform:  }
2025-05-29 00:01:13.323081 | orchestrator | 00:01:13.323 STDOUT terraform:   # local_file.id_rsa_pub will be created
2025-05-29 00:01:13.323154 | orchestrator | 00:01:13.323 STDOUT terraform:   + resource "local_file" "id_rsa_pub" {
2025-05-29 00:01:13.323220 | orchestrator | 00:01:13.323 STDOUT terraform:   + content = (known after apply)
2025-05-29 00:01:13.323286 | orchestrator | 00:01:13.323 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-29 00:01:13.323352 | orchestrator | 00:01:13.323 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-29 00:01:13.323417 | orchestrator | 00:01:13.323 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-29 00:01:13.323482 | orchestrator | 00:01:13.323 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-29 00:01:13.323544 | orchestrator | 00:01:13.323 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-29 00:01:13.323605 | orchestrator | 00:01:13.323 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-29 00:01:13.323648 | orchestrator | 00:01:13.323 STDOUT terraform:   + directory_permission = "0777"
2025-05-29 00:01:13.323691 | orchestrator | 00:01:13.323 STDOUT terraform:   + file_permission = "0644"
2025-05-29 00:01:13.323752 | orchestrator | 00:01:13.323 STDOUT terraform:   + filename = ".id_rsa.ci.pub"
2025-05-29 00:01:13.323817 | orchestrator | 00:01:13.323 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.323840 | orchestrator | 00:01:13.323 STDOUT terraform:  }
2025-05-29 00:01:13.323884 | orchestrator | 00:01:13.323 STDOUT terraform:   # local_file.inventory will be created
2025-05-29 00:01:13.323927 | orchestrator | 00:01:13.323 STDOUT terraform:   + resource "local_file" "inventory" {
2025-05-29 00:01:13.323991 | orchestrator | 00:01:13.323 STDOUT terraform:   + content = (known after apply)
2025-05-29 00:01:13.324050 | orchestrator | 00:01:13.323 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-29 00:01:13.324132 | orchestrator | 00:01:13.324 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-29 00:01:13.324192 | orchestrator | 00:01:13.324 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-29 00:01:13.324261 | orchestrator | 00:01:13.324 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-29 00:01:13.324318 | orchestrator | 00:01:13.324 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-29 00:01:13.324377 | orchestrator | 00:01:13.324 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-29 00:01:13.324424 | orchestrator | 00:01:13.324 STDOUT terraform:   + directory_permission = "0777"
2025-05-29 00:01:13.324466 | orchestrator | 00:01:13.324 STDOUT terraform:   + file_permission = "0644"
2025-05-29 00:01:13.324518 | orchestrator | 00:01:13.324 STDOUT terraform:   + filename = "inventory.ci"
2025-05-29 00:01:13.324582 | orchestrator | 00:01:13.324 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.324604 | orchestrator | 00:01:13.324 STDOUT terraform:  }
2025-05-29 00:01:13.324655 | orchestrator | 00:01:13.324 STDOUT terraform:   # local_sensitive_file.id_rsa will be created
2025-05-29 00:01:13.324706 | orchestrator | 00:01:13.324 STDOUT terraform:   + resource "local_sensitive_file" "id_rsa" {
2025-05-29 00:01:13.324760 | orchestrator | 00:01:13.324 STDOUT terraform:   + content = (sensitive value)
2025-05-29 00:01:13.324820 | orchestrator | 00:01:13.324 STDOUT terraform:   + content_base64sha256 = (known after apply)
2025-05-29 00:01:13.324880 | orchestrator | 00:01:13.324 STDOUT terraform:   + content_base64sha512 = (known after apply)
2025-05-29 00:01:13.324940 | orchestrator | 00:01:13.324 STDOUT terraform:   + content_md5 = (known after apply)
2025-05-29 00:01:13.325001 | orchestrator | 00:01:13.324 STDOUT terraform:   + content_sha1 = (known after apply)
2025-05-29 00:01:13.325103 | orchestrator | 00:01:13.324 STDOUT terraform:   + content_sha256 = (known after apply)
2025-05-29 00:01:13.325141 | orchestrator | 00:01:13.325 STDOUT terraform:   + content_sha512 = (known after apply)
2025-05-29 00:01:13.325182 | orchestrator | 00:01:13.325 STDOUT terraform:   + directory_permission = "0700"
2025-05-29 00:01:13.325225 | orchestrator | 00:01:13.325 STDOUT terraform:   + file_permission = "0600"
2025-05-29 00:01:13.325276 | orchestrator | 00:01:13.325 STDOUT terraform:   + filename = ".id_rsa.ci"
2025-05-29 00:01:13.325341 | orchestrator | 00:01:13.325 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.325362 | orchestrator | 00:01:13.325 STDOUT terraform:  }
2025-05-29 00:01:13.325414 | orchestrator | 00:01:13.325 STDOUT terraform:   # null_resource.node_semaphore will be created
2025-05-29 00:01:13.325465 | orchestrator | 00:01:13.325 STDOUT terraform:   + resource "null_resource" "node_semaphore" {
2025-05-29 00:01:13.325502 | orchestrator | 00:01:13.325 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.325579 | orchestrator | 00:01:13.325 STDOUT terraform:  }
2025-05-29 00:01:13.325665 | orchestrator | 00:01:13.325 STDOUT terraform:   # openstack_blockstorage_volume_v3.manager_base_volume[0] will be created
2025-05-29 00:01:13.325745 | orchestrator | 00:01:13.325 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "manager_base_volume" {
2025-05-29 00:01:13.325806 | orchestrator | 00:01:13.325 STDOUT terraform:   + attachment = (known after apply)
2025-05-29 00:01:13.325848 | orchestrator | 00:01:13.325 STDOUT terraform:   + availability_zone = "nova"
2025-05-29 00:01:13.325911 | orchestrator | 00:01:13.325 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.325975 | orchestrator | 00:01:13.325 STDOUT terraform:   + image_id = (known after apply)
2025-05-29 00:01:13.326057 | orchestrator | 00:01:13.325 STDOUT terraform:   + metadata = (known after apply)
2025-05-29 00:01:13.326190 | orchestrator | 00:01:13.326 STDOUT terraform:   + name = "testbed-volume-manager-base"
2025-05-29 00:01:13.326263 | orchestrator | 00:01:13.326 STDOUT terraform:   + region = (known after apply)
2025-05-29 00:01:13.326307 | orchestrator | 00:01:13.326 STDOUT terraform:   + size = 80
2025-05-29 00:01:13.326350 | orchestrator | 00:01:13.326 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-29 00:01:13.326391 | orchestrator | 00:01:13.326 STDOUT terraform:   + volume_type = "ssd"
2025-05-29 00:01:13.326449 | orchestrator | 00:01:13.326 STDOUT terraform:  }
2025-05-29 00:01:13.326504 | orchestrator | 00:01:13.326 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[0] will be created
2025-05-29 00:01:13.326576 | orchestrator | 00:01:13.326 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-29 00:01:13.326630 | orchestrator | 00:01:13.326 STDOUT terraform:   + attachment = (known after apply)
2025-05-29 00:01:13.326668 | orchestrator | 00:01:13.326 STDOUT terraform:   + availability_zone = "nova"
2025-05-29 00:01:13.326723 | orchestrator | 00:01:13.326 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.326778 | orchestrator | 00:01:13.326 STDOUT terraform:   + image_id = (known after apply)
2025-05-29 00:01:13.326833 | orchestrator | 00:01:13.326 STDOUT terraform:   + metadata = (known after apply)
2025-05-29 00:01:13.326901 | orchestrator | 00:01:13.326 STDOUT terraform:   + name = "testbed-volume-0-node-base"
2025-05-29 00:01:13.326954 | orchestrator | 00:01:13.326 STDOUT terraform:   + region = (known after apply)
2025-05-29 00:01:13.326988 | orchestrator | 00:01:13.326 STDOUT terraform:   + size = 80
2025-05-29 00:01:13.327025 | orchestrator | 00:01:13.326 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-29 00:01:13.327060 | orchestrator | 00:01:13.327 STDOUT terraform:   + volume_type = "ssd"
2025-05-29 00:01:13.327112 | orchestrator | 00:01:13.327 STDOUT terraform:  }
2025-05-29 00:01:13.327163 | orchestrator | 00:01:13.327 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[1] will be created
2025-05-29 00:01:13.327231 | orchestrator | 00:01:13.327 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-29 00:01:13.327289 | orchestrator | 00:01:13.327 STDOUT terraform:   + attachment = (known after apply)
2025-05-29 00:01:13.327320 | orchestrator | 00:01:13.327 STDOUT terraform:   + availability_zone = "nova"
2025-05-29 00:01:13.327374 | orchestrator | 00:01:13.327 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.327428 | orchestrator | 00:01:13.327 STDOUT terraform:   + image_id = (known after apply)
2025-05-29 00:01:13.327481 | orchestrator | 00:01:13.327 STDOUT terraform:   + metadata = (known after apply)
2025-05-29 00:01:13.327548 | orchestrator | 00:01:13.327 STDOUT terraform:   + name = "testbed-volume-1-node-base"
2025-05-29 00:01:13.327603 | orchestrator | 00:01:13.327 STDOUT terraform:   + region = (known after apply)
2025-05-29 00:01:13.327633 | orchestrator | 00:01:13.327 STDOUT terraform:   + size = 80
2025-05-29 00:01:13.327669 | orchestrator | 00:01:13.327 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-29 00:01:13.327705 | orchestrator | 00:01:13.327 STDOUT terraform:   + volume_type = "ssd"
2025-05-29 00:01:13.327724 | orchestrator | 00:01:13.327 STDOUT terraform:  }
2025-05-29 00:01:13.327794 | orchestrator | 00:01:13.327 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[2] will be created
2025-05-29 00:01:13.327861 | orchestrator | 00:01:13.327 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-29 00:01:13.327914 | orchestrator | 00:01:13.327 STDOUT terraform:   + attachment = (known after apply)
2025-05-29 00:01:13.327949 | orchestrator | 00:01:13.327 STDOUT terraform:   + availability_zone = "nova"
2025-05-29 00:01:13.328004 | orchestrator | 00:01:13.327 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.328056 | orchestrator | 00:01:13.328 STDOUT terraform:   + image_id = (known after apply)
2025-05-29 00:01:13.328129 | orchestrator | 00:01:13.328 STDOUT terraform:   + metadata = (known after apply)
2025-05-29 00:01:13.328195 | orchestrator | 00:01:13.328 STDOUT terraform:   + name = "testbed-volume-2-node-base"
2025-05-29 00:01:13.328250 | orchestrator | 00:01:13.328 STDOUT terraform:   + region = (known after apply)
2025-05-29 00:01:13.328282 | orchestrator | 00:01:13.328 STDOUT terraform:   + size = 80
2025-05-29 00:01:13.328319 | orchestrator | 00:01:13.328 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-29 00:01:13.328356 | orchestrator | 00:01:13.328 STDOUT terraform:   + volume_type = "ssd"
2025-05-29 00:01:13.328376 | orchestrator | 00:01:13.328 STDOUT terraform:  }
2025-05-29 00:01:13.328447 | orchestrator | 00:01:13.328 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[3] will be created
2025-05-29 00:01:13.328515 | orchestrator | 00:01:13.328 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-29 00:01:13.328568 | orchestrator | 00:01:13.328 STDOUT terraform:   + attachment = (known after apply)
2025-05-29 00:01:13.328603 | orchestrator | 00:01:13.328 STDOUT terraform:   + availability_zone = "nova"
2025-05-29 00:01:13.328660 | orchestrator | 00:01:13.328 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.328714 | orchestrator | 00:01:13.328 STDOUT terraform:   + image_id = (known after apply)
2025-05-29 00:01:13.328768 | orchestrator | 00:01:13.328 STDOUT terraform:   + metadata = (known after apply)
2025-05-29 00:01:13.328834 | orchestrator | 00:01:13.328 STDOUT terraform:   + name = "testbed-volume-3-node-base"
2025-05-29 00:01:13.328890 | orchestrator | 00:01:13.328 STDOUT terraform:   + region = (known after apply)
2025-05-29 00:01:13.328925 | orchestrator | 00:01:13.328 STDOUT terraform:   + size = 80
2025-05-29 00:01:13.328961 | orchestrator | 00:01:13.328 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-29 00:01:13.328998 | orchestrator | 00:01:13.328 STDOUT terraform:   + volume_type = "ssd"
2025-05-29 00:01:13.329019 | orchestrator | 00:01:13.328 STDOUT terraform:  }
2025-05-29 00:01:13.329108 | orchestrator | 00:01:13.329 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[4] will be created
2025-05-29 00:01:13.329181 | orchestrator | 00:01:13.329 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-29 00:01:13.329234 | orchestrator | 00:01:13.329 STDOUT terraform:   + attachment = (known after apply)
2025-05-29 00:01:13.329269 | orchestrator | 00:01:13.329 STDOUT terraform:   + availability_zone = "nova"
2025-05-29 00:01:13.329324 | orchestrator | 00:01:13.329 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.329377 | orchestrator | 00:01:13.329 STDOUT terraform:   + image_id = (known after apply)
2025-05-29 00:01:13.329432 | orchestrator | 00:01:13.329 STDOUT terraform:   + metadata = (known after apply)
2025-05-29 00:01:13.329501 | orchestrator | 00:01:13.329 STDOUT terraform:   + name = "testbed-volume-4-node-base"
2025-05-29 00:01:13.329555 | orchestrator | 00:01:13.329 STDOUT terraform:   + region = (known after apply)
2025-05-29 00:01:13.329586 | orchestrator | 00:01:13.329 STDOUT terraform:   + size = 80
2025-05-29 00:01:13.329627 | orchestrator | 00:01:13.329 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-29 00:01:13.329658 | orchestrator | 00:01:13.329 STDOUT terraform:   + volume_type = "ssd"
2025-05-29 00:01:13.329679 | orchestrator | 00:01:13.329 STDOUT terraform:  }
2025-05-29 00:01:13.329750 | orchestrator | 00:01:13.329 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_base_volume[5] will be created
2025-05-29 00:01:13.329818 | orchestrator | 00:01:13.329 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_base_volume" {
2025-05-29 00:01:13.329872 | orchestrator | 00:01:13.329 STDOUT terraform:   + attachment = (known after apply)
2025-05-29 00:01:13.329911 | orchestrator | 00:01:13.329 STDOUT terraform:   + availability_zone = "nova"
2025-05-29 00:01:13.329966 | orchestrator | 00:01:13.329 STDOUT terraform:   + id = (known after apply)
2025-05-29 00:01:13.330047 | orchestrator | 00:01:13.329 STDOUT terraform:   + image_id = (known after apply)
2025-05-29 00:01:13.330132 | orchestrator | 00:01:13.330 STDOUT terraform:   + metadata = (known after apply)
2025-05-29 00:01:13.330192 | orchestrator | 00:01:13.330 STDOUT terraform:   + name = "testbed-volume-5-node-base"
2025-05-29 00:01:13.330245 | orchestrator | 00:01:13.330 STDOUT terraform:   + region = (known after apply)
2025-05-29 00:01:13.330274 | orchestrator | 00:01:13.330 STDOUT terraform:   + size = 80
2025-05-29 00:01:13.330306 | orchestrator | 00:01:13.330 STDOUT terraform:   + volume_retype_policy = "never"
2025-05-29 00:01:13.330340 | orchestrator | 00:01:13.330 STDOUT terraform:   + volume_type = "ssd"
2025-05-29 00:01:13.330359 | orchestrator | 00:01:13.330 STDOUT terraform:  }
2025-05-29 00:01:13.330455 | orchestrator | 00:01:13.330 STDOUT terraform:   # openstack_blockstorage_volume_v3.node_volume[0] will be created
2025-05-29 00:01:13.330512 | orchestrator | 00:01:13.330 STDOUT terraform:   + resource "openstack_blockstorage_volume_v3" "node_volume" {
2025-05-29 00:01:13.330587 | orchestrator | 00:01:13.330 STDOUT terraform:   + attachment = (known after apply)
2025-05-29 00:01:13.330636 | orchestrator | 00:01:13.330 STDOUT terraform:   +
availability_zone = "nova" 2025-05-29 00:01:13.330710 | orchestrator | 00:01:13.330 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.330759 | orchestrator | 00:01:13.330 STDOUT terraform:  + metadata = (known after apply) 2025-05-29 00:01:13.330813 | orchestrator | 00:01:13.330 STDOUT terraform:  + name = "testbed-volume-0-node-3" 2025-05-29 00:01:13.330862 | orchestrator | 00:01:13.330 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.330890 | orchestrator | 00:01:13.330 STDOUT terraform:  + size = 20 2025-05-29 00:01:13.330924 | orchestrator | 00:01:13.330 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-29 00:01:13.330958 | orchestrator | 00:01:13.330 STDOUT terraform:  + volume_type = "ssd" 2025-05-29 00:01:13.330977 | orchestrator | 00:01:13.330 STDOUT terraform:  } 2025-05-29 00:01:13.331048 | orchestrator | 00:01:13.330 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[1] will be created 2025-05-29 00:01:13.331134 | orchestrator | 00:01:13.331 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-29 00:01:13.331177 | orchestrator | 00:01:13.331 STDOUT terraform:  + attachment = (known after apply) 2025-05-29 00:01:13.331210 | orchestrator | 00:01:13.331 STDOUT terraform:  + availability_zone = "nova" 2025-05-29 00:01:13.331258 | orchestrator | 00:01:13.331 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.331305 | orchestrator | 00:01:13.331 STDOUT terraform:  + metadata = (known after apply) 2025-05-29 00:01:13.331361 | orchestrator | 00:01:13.331 STDOUT terraform:  + name = "testbed-volume-1-node-4" 2025-05-29 00:01:13.331406 | orchestrator | 00:01:13.331 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.331436 | orchestrator | 00:01:13.331 STDOUT terraform:  + size = 20 2025-05-29 00:01:13.331469 | orchestrator | 00:01:13.331 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-29 00:01:13.331502 | orchestrator | 
00:01:13.331 STDOUT terraform:  + volume_type = "ssd" 2025-05-29 00:01:13.331519 | orchestrator | 00:01:13.331 STDOUT terraform:  } 2025-05-29 00:01:13.331578 | orchestrator | 00:01:13.331 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[2] will be created 2025-05-29 00:01:13.331635 | orchestrator | 00:01:13.331 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-29 00:01:13.331684 | orchestrator | 00:01:13.331 STDOUT terraform:  + attachment = (known after apply) 2025-05-29 00:01:13.331717 | orchestrator | 00:01:13.331 STDOUT terraform:  + availability_zone = "nova" 2025-05-29 00:01:13.331767 | orchestrator | 00:01:13.331 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.331815 | orchestrator | 00:01:13.331 STDOUT terraform:  + metadata = (known after apply) 2025-05-29 00:01:13.331867 | orchestrator | 00:01:13.331 STDOUT terraform:  + name = "testbed-volume-2-node-5" 2025-05-29 00:01:13.331914 | orchestrator | 00:01:13.331 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.331942 | orchestrator | 00:01:13.331 STDOUT terraform:  + size = 20 2025-05-29 00:01:13.331975 | orchestrator | 00:01:13.331 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-29 00:01:13.332008 | orchestrator | 00:01:13.331 STDOUT terraform:  + volume_type = "ssd" 2025-05-29 00:01:13.332026 | orchestrator | 00:01:13.332 STDOUT terraform:  } 2025-05-29 00:01:13.332119 | orchestrator | 00:01:13.332 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[3] will be created 2025-05-29 00:01:13.332176 | orchestrator | 00:01:13.332 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-29 00:01:13.332223 | orchestrator | 00:01:13.332 STDOUT terraform:  + attachment = (known after apply) 2025-05-29 00:01:13.332258 | orchestrator | 00:01:13.332 STDOUT terraform:  + availability_zone = "nova" 2025-05-29 00:01:13.332304 | orchestrator | 00:01:13.332 STDOUT 
terraform:  + id = (known after apply) 2025-05-29 00:01:13.332349 | orchestrator | 00:01:13.332 STDOUT terraform:  + metadata = (known after apply) 2025-05-29 00:01:13.332400 | orchestrator | 00:01:13.332 STDOUT terraform:  + name = "testbed-volume-3-node-3" 2025-05-29 00:01:13.332446 | orchestrator | 00:01:13.332 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.332472 | orchestrator | 00:01:13.332 STDOUT terraform:  + size = 20 2025-05-29 00:01:13.332510 | orchestrator | 00:01:13.332 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-29 00:01:13.332536 | orchestrator | 00:01:13.332 STDOUT terraform:  + volume_type = "ssd" 2025-05-29 00:01:13.332554 | orchestrator | 00:01:13.332 STDOUT terraform:  } 2025-05-29 00:01:13.332610 | orchestrator | 00:01:13.332 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[4] will be created 2025-05-29 00:01:13.332672 | orchestrator | 00:01:13.332 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-29 00:01:13.332723 | orchestrator | 00:01:13.332 STDOUT terraform:  + attachment = (known after apply) 2025-05-29 00:01:13.332755 | orchestrator | 00:01:13.332 STDOUT terraform:  + availability_zone = "nova" 2025-05-29 00:01:13.332801 | orchestrator | 00:01:13.332 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.332848 | orchestrator | 00:01:13.332 STDOUT terraform:  + metadata = (known after apply) 2025-05-29 00:01:13.332902 | orchestrator | 00:01:13.332 STDOUT terraform:  + name = "testbed-volume-4-node-4" 2025-05-29 00:01:13.332946 | orchestrator | 00:01:13.332 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.332972 | orchestrator | 00:01:13.332 STDOUT terraform:  + size = 20 2025-05-29 00:01:13.333002 | orchestrator | 00:01:13.332 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-29 00:01:13.333033 | orchestrator | 00:01:13.333 STDOUT terraform:  + volume_type = "ssd" 2025-05-29 00:01:13.333047 | 
orchestrator | 00:01:13.333 STDOUT terraform:  } 2025-05-29 00:01:13.333187 | orchestrator | 00:01:13.333 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[5] will be created 2025-05-29 00:01:13.333241 | orchestrator | 00:01:13.333 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-29 00:01:13.333286 | orchestrator | 00:01:13.333 STDOUT terraform:  + attachment = (known after apply) 2025-05-29 00:01:13.333316 | orchestrator | 00:01:13.333 STDOUT terraform:  + availability_zone = "nova" 2025-05-29 00:01:13.333363 | orchestrator | 00:01:13.333 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.333409 | orchestrator | 00:01:13.333 STDOUT terraform:  + metadata = (known after apply) 2025-05-29 00:01:13.333459 | orchestrator | 00:01:13.333 STDOUT terraform:  + name = "testbed-volume-5-node-5" 2025-05-29 00:01:13.333504 | orchestrator | 00:01:13.333 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.333533 | orchestrator | 00:01:13.333 STDOUT terraform:  + size = 20 2025-05-29 00:01:13.333564 | orchestrator | 00:01:13.333 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-29 00:01:13.333594 | orchestrator | 00:01:13.333 STDOUT terraform:  + volume_type = "ssd" 2025-05-29 00:01:13.333611 | orchestrator | 00:01:13.333 STDOUT terraform:  } 2025-05-29 00:01:13.333669 | orchestrator | 00:01:13.333 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[6] will be created 2025-05-29 00:01:13.333723 | orchestrator | 00:01:13.333 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-29 00:01:13.333767 | orchestrator | 00:01:13.333 STDOUT terraform:  + attachment = (known after apply) 2025-05-29 00:01:13.333798 | orchestrator | 00:01:13.333 STDOUT terraform:  + availability_zone = "nova" 2025-05-29 00:01:13.333844 | orchestrator | 00:01:13.333 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.333888 | orchestrator | 
00:01:13.333 STDOUT terraform:  + metadata = (known after apply) 2025-05-29 00:01:13.333937 | orchestrator | 00:01:13.333 STDOUT terraform:  + name = "testbed-volume-6-node-3" 2025-05-29 00:01:13.333984 | orchestrator | 00:01:13.333 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.334012 | orchestrator | 00:01:13.333 STDOUT terraform:  + size = 20 2025-05-29 00:01:13.334082 | orchestrator | 00:01:13.334 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-29 00:01:13.334111 | orchestrator | 00:01:13.334 STDOUT terraform:  + volume_type = "ssd" 2025-05-29 00:01:13.334128 | orchestrator | 00:01:13.334 STDOUT terraform:  } 2025-05-29 00:01:13.334184 | orchestrator | 00:01:13.334 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[7] will be created 2025-05-29 00:01:13.334238 | orchestrator | 00:01:13.334 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-29 00:01:13.334285 | orchestrator | 00:01:13.334 STDOUT terraform:  + attachment = (known after apply) 2025-05-29 00:01:13.334318 | orchestrator | 00:01:13.334 STDOUT terraform:  + availability_zone = "nova" 2025-05-29 00:01:13.334364 | orchestrator | 00:01:13.334 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.334408 | orchestrator | 00:01:13.334 STDOUT terraform:  + metadata = (known after apply) 2025-05-29 00:01:13.334456 | orchestrator | 00:01:13.334 STDOUT terraform:  + name = "testbed-volume-7-node-4" 2025-05-29 00:01:13.334501 | orchestrator | 00:01:13.334 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.334528 | orchestrator | 00:01:13.334 STDOUT terraform:  + size = 20 2025-05-29 00:01:13.334558 | orchestrator | 00:01:13.334 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-29 00:01:13.334590 | orchestrator | 00:01:13.334 STDOUT terraform:  + volume_type = "ssd" 2025-05-29 00:01:13.334607 | orchestrator | 00:01:13.334 STDOUT terraform:  } 2025-05-29 00:01:13.334681 | orchestrator | 
00:01:13.334 STDOUT terraform:  # openstack_blockstorage_volume_v3.node_volume[8] will be created 2025-05-29 00:01:13.334763 | orchestrator | 00:01:13.334 STDOUT terraform:  + resource "openstack_blockstorage_volume_v3" "node_volume" { 2025-05-29 00:01:13.334842 | orchestrator | 00:01:13.334 STDOUT terraform:  + attachment = (known after apply) 2025-05-29 00:01:13.334895 | orchestrator | 00:01:13.334 STDOUT terraform:  + availability_zone = "nova" 2025-05-29 00:01:13.334953 | orchestrator | 00:01:13.334 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.334999 | orchestrator | 00:01:13.334 STDOUT terraform:  + metadata = (known after apply) 2025-05-29 00:01:13.335049 | orchestrator | 00:01:13.334 STDOUT terraform:  + name = "testbed-volume-8-node-5" 2025-05-29 00:01:13.335127 | orchestrator | 00:01:13.335 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.335156 | orchestrator | 00:01:13.335 STDOUT terraform:  + size = 20 2025-05-29 00:01:13.335188 | orchestrator | 00:01:13.335 STDOUT terraform:  + volume_retype_policy = "never" 2025-05-29 00:01:13.335220 | orchestrator | 00:01:13.335 STDOUT terraform:  + volume_type = "ssd" 2025-05-29 00:01:13.335238 | orchestrator | 00:01:13.335 STDOUT terraform:  } 2025-05-29 00:01:13.335293 | orchestrator | 00:01:13.335 STDOUT terraform:  # openstack_compute_instance_v2.manager_server will be created 2025-05-29 00:01:13.335344 | orchestrator | 00:01:13.335 STDOUT terraform:  + resource "openstack_compute_instance_v2" "manager_server" { 2025-05-29 00:01:13.335385 | orchestrator | 00:01:13.335 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-29 00:01:13.335426 | orchestrator | 00:01:13.335 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-29 00:01:13.335465 | orchestrator | 00:01:13.335 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-29 00:01:13.335505 | orchestrator | 00:01:13.335 STDOUT terraform:  + all_tags = (known after apply) 2025-05-29 
00:01:13.335532 | orchestrator | 00:01:13.335 STDOUT terraform:  + availability_zone = "nova" 2025-05-29 00:01:13.335556 | orchestrator | 00:01:13.335 STDOUT terraform:  + config_drive = true 2025-05-29 00:01:13.335596 | orchestrator | 00:01:13.335 STDOUT terraform:  + created = (known after apply) 2025-05-29 00:01:13.335638 | orchestrator | 00:01:13.335 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-29 00:01:13.335673 | orchestrator | 00:01:13.335 STDOUT terraform:  + flavor_name = "OSISM-4V-16" 2025-05-29 00:01:13.335701 | orchestrator | 00:01:13.335 STDOUT terraform:  + force_delete = false 2025-05-29 00:01:13.335740 | orchestrator | 00:01:13.335 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-29 00:01:13.335781 | orchestrator | 00:01:13.335 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.335825 | orchestrator | 00:01:13.335 STDOUT terraform:  + image_id = (known after apply) 2025-05-29 00:01:13.335861 | orchestrator | 00:01:13.335 STDOUT terraform:  + image_name = (known after apply) 2025-05-29 00:01:13.335890 | orchestrator | 00:01:13.335 STDOUT terraform:  + key_pair = "testbed" 2025-05-29 00:01:13.335925 | orchestrator | 00:01:13.335 STDOUT terraform:  + name = "testbed-manager" 2025-05-29 00:01:13.335954 | orchestrator | 00:01:13.335 STDOUT terraform:  + power_state = "active" 2025-05-29 00:01:13.335996 | orchestrator | 00:01:13.335 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.336036 | orchestrator | 00:01:13.335 STDOUT terraform:  + security_groups = (known after apply) 2025-05-29 00:01:13.336165 | orchestrator | 00:01:13.336 STDOUT terraform:  + stop_before_destroy = false 2025-05-29 00:01:13.336188 | orchestrator | 00:01:13.336 STDOUT terraform:  + updated = (known after apply) 2025-05-29 00:01:13.336193 | orchestrator | 00:01:13.336 STDOUT terraform:  + user_data = (known after apply) 2025-05-29 00:01:13.336197 | orchestrator | 00:01:13.336 STDOUT terraform:  + block_device 
{ 2025-05-29 00:01:13.336202 | orchestrator | 00:01:13.336 STDOUT terraform:  + boot_index = 0 2025-05-29 00:01:13.336222 | orchestrator | 00:01:13.336 STDOUT terraform:  + delete_on_termination = false 2025-05-29 00:01:13.336256 | orchestrator | 00:01:13.336 STDOUT terraform:  + destination_type = "volume" 2025-05-29 00:01:13.336288 | orchestrator | 00:01:13.336 STDOUT terraform:  + multiattach = false 2025-05-29 00:01:13.336322 | orchestrator | 00:01:13.336 STDOUT terraform:  + source_type = "volume" 2025-05-29 00:01:13.336367 | orchestrator | 00:01:13.336 STDOUT terraform:  + uuid = (known after apply) 2025-05-29 00:01:13.336384 | orchestrator | 00:01:13.336 STDOUT terraform:  } 2025-05-29 00:01:13.336400 | orchestrator | 00:01:13.336 STDOUT terraform:  + network { 2025-05-29 00:01:13.336424 | orchestrator | 00:01:13.336 STDOUT terraform:  + access_network = false 2025-05-29 00:01:13.336459 | orchestrator | 00:01:13.336 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-29 00:01:13.336496 | orchestrator | 00:01:13.336 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-29 00:01:13.336533 | orchestrator | 00:01:13.336 STDOUT terraform:  + mac = (known after apply) 2025-05-29 00:01:13.336569 | orchestrator | 00:01:13.336 STDOUT terraform:  + name = (known after apply) 2025-05-29 00:01:13.336605 | orchestrator | 00:01:13.336 STDOUT terraform:  + port = (known after apply) 2025-05-29 00:01:13.336643 | orchestrator | 00:01:13.336 STDOUT terraform:  + uuid = (known after apply) 2025-05-29 00:01:13.336657 | orchestrator | 00:01:13.336 STDOUT terraform:  } 2025-05-29 00:01:13.336664 | orchestrator | 00:01:13.336 STDOUT terraform:  } 2025-05-29 00:01:13.336716 | orchestrator | 00:01:13.336 STDOUT terraform:  # openstack_compute_instance_v2.node_server[0] will be created 2025-05-29 00:01:13.336763 | orchestrator | 00:01:13.336 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-29 00:01:13.336804 | orchestrator | 
00:01:13.336 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-29 00:01:13.336843 | orchestrator | 00:01:13.336 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-29 00:01:13.336883 | orchestrator | 00:01:13.336 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-29 00:01:13.336924 | orchestrator | 00:01:13.336 STDOUT terraform:  + all_tags = (known after apply) 2025-05-29 00:01:13.336953 | orchestrator | 00:01:13.336 STDOUT terraform:  + availability_zone = "nova" 2025-05-29 00:01:13.336976 | orchestrator | 00:01:13.336 STDOUT terraform:  + config_drive = true 2025-05-29 00:01:13.337016 | orchestrator | 00:01:13.336 STDOUT terraform:  + created = (known after apply) 2025-05-29 00:01:13.337056 | orchestrator | 00:01:13.337 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-29 00:01:13.337105 | orchestrator | 00:01:13.337 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-29 00:01:13.337132 | orchestrator | 00:01:13.337 STDOUT terraform:  + force_delete = false 2025-05-29 00:01:13.337171 | orchestrator | 00:01:13.337 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-29 00:01:13.337211 | orchestrator | 00:01:13.337 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.337251 | orchestrator | 00:01:13.337 STDOUT terraform:  + image_id = (known after apply) 2025-05-29 00:01:13.337294 | orchestrator | 00:01:13.337 STDOUT terraform:  + image_name = (known after apply) 2025-05-29 00:01:13.337323 | orchestrator | 00:01:13.337 STDOUT terraform:  + key_pair = "testbed" 2025-05-29 00:01:13.337358 | orchestrator | 00:01:13.337 STDOUT terraform:  + name = "testbed-node-0" 2025-05-29 00:01:13.337386 | orchestrator | 00:01:13.337 STDOUT terraform:  + power_state = "active" 2025-05-29 00:01:13.337434 | orchestrator | 00:01:13.337 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.337465 | orchestrator | 00:01:13.337 STDOUT terraform:  + security_groups = (known after apply) 
2025-05-29 00:01:13.337491 | orchestrator | 00:01:13.337 STDOUT terraform:  + stop_before_destroy = false 2025-05-29 00:01:13.337531 | orchestrator | 00:01:13.337 STDOUT terraform:  + updated = (known after apply) 2025-05-29 00:01:13.337588 | orchestrator | 00:01:13.337 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-29 00:01:13.337607 | orchestrator | 00:01:13.337 STDOUT terraform:  + block_device { 2025-05-29 00:01:13.337636 | orchestrator | 00:01:13.337 STDOUT terraform:  + boot_index = 0 2025-05-29 00:01:13.337667 | orchestrator | 00:01:13.337 STDOUT terraform:  + delete_on_termination = false 2025-05-29 00:01:13.337702 | orchestrator | 00:01:13.337 STDOUT terraform:  + destination_type = "volume" 2025-05-29 00:01:13.337735 | orchestrator | 00:01:13.337 STDOUT terraform:  + multiattach = false 2025-05-29 00:01:13.337772 | orchestrator | 00:01:13.337 STDOUT terraform:  + source_type = "volume" 2025-05-29 00:01:13.337815 | orchestrator | 00:01:13.337 STDOUT terraform:  + uuid = (known after apply) 2025-05-29 00:01:13.337830 | orchestrator | 00:01:13.337 STDOUT terraform:  } 2025-05-29 00:01:13.340952 | orchestrator | 00:01:13.337 STDOUT terraform:  + network { 2025-05-29 00:01:13.341620 | orchestrator | 00:01:13.340 STDOUT terraform:  + access_network = false 2025-05-29 00:01:13.341696 | orchestrator | 00:01:13.341 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-29 00:01:13.341746 | orchestrator | 00:01:13.341 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-29 00:01:13.341787 | orchestrator | 00:01:13.341 STDOUT terraform:  + mac = (known after apply) 2025-05-29 00:01:13.341829 | orchestrator | 00:01:13.341 STDOUT terraform:  + name = (known after apply) 2025-05-29 00:01:13.341870 | orchestrator | 00:01:13.341 STDOUT terraform:  + port = (known after apply) 2025-05-29 00:01:13.341913 | orchestrator | 00:01:13.341 STDOUT terraform:  + uuid = (known after apply) 2025-05-29 00:01:13.341936 | 
orchestrator | 00:01:13.341 STDOUT terraform:  } 2025-05-29 00:01:13.341959 | orchestrator | 00:01:13.341 STDOUT terraform:  } 2025-05-29 00:01:13.342038 | orchestrator | 00:01:13.341 STDOUT terraform:  # openstack_compute_instance_v2.node_server[1] will be created 2025-05-29 00:01:13.342135 | orchestrator | 00:01:13.342 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-29 00:01:13.342193 | orchestrator | 00:01:13.342 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-29 00:01:13.342239 | orchestrator | 00:01:13.342 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-29 00:01:13.342282 | orchestrator | 00:01:13.342 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-29 00:01:13.342325 | orchestrator | 00:01:13.342 STDOUT terraform:  + all_tags = (known after apply) 2025-05-29 00:01:13.342360 | orchestrator | 00:01:13.342 STDOUT terraform:  + availability_zone = "nova" 2025-05-29 00:01:13.342387 | orchestrator | 00:01:13.342 STDOUT terraform:  + config_drive = true 2025-05-29 00:01:13.342432 | orchestrator | 00:01:13.342 STDOUT terraform:  + created = (known after apply) 2025-05-29 00:01:13.342474 | orchestrator | 00:01:13.342 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-29 00:01:13.342511 | orchestrator | 00:01:13.342 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-29 00:01:13.342544 | orchestrator | 00:01:13.342 STDOUT terraform:  + force_delete = false 2025-05-29 00:01:13.342585 | orchestrator | 00:01:13.342 STDOUT terraform:  + hypervisor_hostname = (known after apply) 2025-05-29 00:01:13.342627 | orchestrator | 00:01:13.342 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.342669 | orchestrator | 00:01:13.342 STDOUT terraform:  + image_id = (known after apply) 2025-05-29 00:01:13.342711 | orchestrator | 00:01:13.342 STDOUT terraform:  + image_name = (known after apply) 2025-05-29 00:01:13.342755 | orchestrator | 00:01:13.342 STDOUT terraform:  + 
key_pair = "testbed" 2025-05-29 00:01:13.342795 | orchestrator | 00:01:13.342 STDOUT terraform:  + name = "testbed-node-1" 2025-05-29 00:01:13.342827 | orchestrator | 00:01:13.342 STDOUT terraform:  + power_state = "active" 2025-05-29 00:01:13.342870 | orchestrator | 00:01:13.342 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.342914 | orchestrator | 00:01:13.342 STDOUT terraform:  + security_groups = (known after apply) 2025-05-29 00:01:13.342945 | orchestrator | 00:01:13.342 STDOUT terraform:  + stop_before_destroy = false 2025-05-29 00:01:13.342986 | orchestrator | 00:01:13.342 STDOUT terraform:  + updated = (known after apply) 2025-05-29 00:01:13.343045 | orchestrator | 00:01:13.342 STDOUT terraform:  + user_data = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854" 2025-05-29 00:01:13.343105 | orchestrator | 00:01:13.343 STDOUT terraform:  + block_device { 2025-05-29 00:01:13.343142 | orchestrator | 00:01:13.343 STDOUT terraform:  + boot_index = 0 2025-05-29 00:01:13.343178 | orchestrator | 00:01:13.343 STDOUT terraform:  + delete_on_termination = false 2025-05-29 00:01:13.343215 | orchestrator | 00:01:13.343 STDOUT terraform:  + destination_type = "volume" 2025-05-29 00:01:13.343254 | orchestrator | 00:01:13.343 STDOUT terraform:  + multiattach = false 2025-05-29 00:01:13.343297 | orchestrator | 00:01:13.343 STDOUT terraform:  + source_type = "volume" 2025-05-29 00:01:13.343346 | orchestrator | 00:01:13.343 STDOUT terraform:  + uuid = (known after apply) 2025-05-29 00:01:13.343368 | orchestrator | 00:01:13.343 STDOUT terraform:  } 2025-05-29 00:01:13.343391 | orchestrator | 00:01:13.343 STDOUT terraform:  + network { 2025-05-29 00:01:13.343419 | orchestrator | 00:01:13.343 STDOUT terraform:  + access_network = false 2025-05-29 00:01:13.343459 | orchestrator | 00:01:13.343 STDOUT terraform:  + fixed_ip_v4 = (known after apply) 2025-05-29 00:01:13.343500 | orchestrator | 00:01:13.343 STDOUT terraform:  + fixed_ip_v6 = (known after apply) 2025-05-29 
00:01:13.343540 | orchestrator | 00:01:13.343 STDOUT terraform:  + mac = (known after apply) 2025-05-29 00:01:13.343579 | orchestrator | 00:01:13.343 STDOUT terraform:  + name = (known after apply) 2025-05-29 00:01:13.343619 | orchestrator | 00:01:13.343 STDOUT terraform:  + port = (known after apply) 2025-05-29 00:01:13.343658 | orchestrator | 00:01:13.343 STDOUT terraform:  + uuid = (known after apply) 2025-05-29 00:01:13.343679 | orchestrator | 00:01:13.343 STDOUT terraform:  } 2025-05-29 00:01:13.343700 | orchestrator | 00:01:13.343 STDOUT terraform:  } 2025-05-29 00:01:13.343749 | orchestrator | 00:01:13.343 STDOUT terraform:  # openstack_compute_instance_v2.node_server[2] will be created 2025-05-29 00:01:13.343798 | orchestrator | 00:01:13.343 STDOUT terraform:  + resource "openstack_compute_instance_v2" "node_server" { 2025-05-29 00:01:13.343842 | orchestrator | 00:01:13.343 STDOUT terraform:  + access_ip_v4 = (known after apply) 2025-05-29 00:01:13.343884 | orchestrator | 00:01:13.343 STDOUT terraform:  + access_ip_v6 = (known after apply) 2025-05-29 00:01:13.343933 | orchestrator | 00:01:13.343 STDOUT terraform:  + all_metadata = (known after apply) 2025-05-29 00:01:13.343975 | orchestrator | 00:01:13.343 STDOUT terraform:  + all_tags = (known after apply) 2025-05-29 00:01:13.344006 | orchestrator | 00:01:13.343 STDOUT terraform:  + availability_zone = "nova" 2025-05-29 00:01:13.344036 | orchestrator | 00:01:13.344 STDOUT terraform:  + config_drive = true 2025-05-29 00:01:13.344109 | orchestrator | 00:01:13.344 STDOUT terraform:  + created = (known after apply) 2025-05-29 00:01:13.344156 | orchestrator | 00:01:13.344 STDOUT terraform:  + flavor_id = (known after apply) 2025-05-29 00:01:13.344210 | orchestrator | 00:01:13.344 STDOUT terraform:  + flavor_name = "OSISM-8V-32" 2025-05-29 00:01:13.344242 | orchestrator | 00:01:13.344 STDOUT terraform:  + force_delete = false 2025-05-29 00:01:13.344284 | orchestrator | 00:01:13.344 STDOUT terraform:  + 
2025-05-29 00:01:13.344 | orchestrator | 00:01:13.344 STDOUT terraform:
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-2"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[3] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-3"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[4] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-4"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_instance_v2.node_server[5] will be created
  + resource "openstack_compute_instance_v2" "node_server" {
      + access_ip_v4        = (known after apply)
      + access_ip_v6        = (known after apply)
      + all_metadata        = (known after apply)
      + all_tags            = (known after apply)
      + availability_zone   = "nova"
      + config_drive        = true
      + created             = (known after apply)
      + flavor_id           = (known after apply)
      + flavor_name         = "OSISM-8V-32"
      + force_delete        = false
      + hypervisor_hostname = (known after apply)
      + id                  = (known after apply)
      + image_id            = (known after apply)
      + image_name          = (known after apply)
      + key_pair            = "testbed"
      + name                = "testbed-node-5"
      + power_state         = "active"
      + region              = (known after apply)
      + security_groups     = (known after apply)
      + stop_before_destroy = false
      + updated             = (known after apply)
      + user_data           = "ae09e46b224a6ca206a9ed4f8f8a4f8520827854"

      + block_device {
          + boot_index            = 0
          + delete_on_termination = false
          + destination_type      = "volume"
          + multiattach           = false
          + source_type           = "volume"
          + uuid                  = (known after apply)
        }

      + network {
          + access_network = false
          + fixed_ip_v4    = (known after apply)
          + fixed_ip_v6    = (known after apply)
          + mac            = (known after apply)
          + name           = (known after apply)
          + port           = (known after apply)
          + uuid           = (known after apply)
        }
    }

  # openstack_compute_keypair_v2.key will be created
  + resource "openstack_compute_keypair_v2" "key" {
      + fingerprint = (known after apply)
      + id          = (known after apply)
      + name        = "testbed"
      + private_key = (sensitive value)
      + public_key  = (known after apply)
      + region      = (known after apply)
      + user_id     = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[0] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[1] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[2] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[3] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[4] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[5] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[6] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[7] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_compute_volume_attach_v2.node_volume_attachment[8] will be created
  + resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
      + device      = (known after apply)
      + id          = (known after apply)
      + instance_id = (known after apply)
      + region      = (known after apply)
      + volume_id   = (known after apply)
    }

  # openstack_networking_floatingip_associate_v2.manager_floating_ip_association will be created
  + resource "openstack_networking_floatingip_associate_v2" "manager_floating_ip_association" {
      + fixed_ip    = (known after apply)
      + floating_ip = (known after apply)
      + id          = (known after apply)
      + port_id     = (known after apply)
      + region      = (known after apply)
    }

  # openstack_networking_floatingip_v2.manager_floating_ip will be created
  + resource "openstack_networking_floatingip_v2" "manager_floating_ip" {
      + address    = (known after apply)
      + all_tags   = (known after apply)
      + dns_domain = (known after apply)
      + dns_name   = (known after apply)
      + fixed_ip   = (known after apply)
      + id         = (known after apply)
      + pool       = "public"
      + port_id    = (known after apply)
      + region     = (known after apply)
      + subnet_id  = (known after apply)
      + tenant_id  = (known after apply)
    }

  # openstack_networking_network_v2.net_management will be created
  + resource "openstack_networking_network_v2" "net_management" {
      + admin_state_up          = (known after apply)
      + all_tags                = (known after apply)
      + availability_zone_hints = [
          + "nova",
        ]
      + dns_domain              = (known after apply)
      + external                = (known after apply)
      + id                      = (known after apply)
      + mtu                     = (known after apply)
      + name                    = "net-testbed-management"
      + port_security_enabled   = (known after apply)
      + qos_policy_id           = (known after apply)
      + region                  = (known after apply)
      + shared                  = (known after apply)
      + tenant_id               = (known after apply)
      + transparent_vlan        = (known after apply)

      + segments (known after apply)
    }

  # openstack_networking_port_v2.manager_port_management will be created
  + resource "openstack_networking_port_v2" "manager_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.5"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[0] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.254/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.8/20"
        }
      + allowed_address_pairs {
          + ip_address = "192.168.16.9/20"
        }

      + binding (known after apply)

      + fixed_ip {
          + ip_address = "192.168.16.10"
          + subnet_id  = (known after apply)
        }
    }

  # openstack_networking_port_v2.node_port_management[1] will be created
  + resource "openstack_networking_port_v2" "node_port_management" {
      + admin_state_up         = (known after apply)
      + all_fixed_ips          = (known after apply)
      + all_security_group_ids = (known after apply)
      + all_tags               = (known after apply)
      + device_id              = (known after apply)
      + device_owner           = (known after apply)
      + dns_assignment         = (known after apply)
      + dns_name               = (known after apply)
      + id                     = (known after apply)
      + mac_address            = (known after apply)
      + network_id             = (known after apply)
      + port_security_enabled  = (known after apply)
      + qos_policy_id          = (known after apply)
      + region                 = (known after apply)
      + security_group_ids     = (known after apply)
      + tenant_id              = (known after apply)

      + allowed_address_pairs {
          + ip_address = "192.168.112.0/20"
        }
+ allowed_address_pairs { 2025-05-29 00:01:13.355766 | orchestrator | 00:01:13.355 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-29 00:01:13.355780 | orchestrator | 00:01:13.355 STDOUT terraform:  } 2025-05-29 00:01:13.355794 | orchestrator | 00:01:13.355 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.355850 | orchestrator | 00:01:13.355 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-29 00:01:13.355863 | orchestrator | 00:01:13.355 STDOUT terraform:  } 2025-05-29 00:01:13.355877 | orchestrator | 00:01:13.355 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.355933 | orchestrator | 00:01:13.355 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-29 00:01:13.355946 | orchestrator | 00:01:13.355 STDOUT terraform:  } 2025-05-29 00:01:13.355960 | orchestrator | 00:01:13.355 STDOUT terraform:  + binding (known after apply) 2025-05-29 00:01:13.355972 | orchestrator | 00:01:13.355 STDOUT terraform:  + fixed_ip { 2025-05-29 00:01:13.355985 | orchestrator | 00:01:13.355 STDOUT terraform:  + ip_address = "192.168.16.11" 2025-05-29 00:01:13.356042 | orchestrator | 00:01:13.355 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-29 00:01:13.356055 | orchestrator | 00:01:13.356 STDOUT terraform:  } 2025-05-29 00:01:13.356110 | orchestrator | 00:01:13.356 STDOUT terraform:  } 2025-05-29 00:01:13.356125 | orchestrator | 00:01:13.356 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[2] will be created 2025-05-29 00:01:13.356193 | orchestrator | 00:01:13.356 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-29 00:01:13.356236 | orchestrator | 00:01:13.356 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-29 00:01:13.356279 | orchestrator | 00:01:13.356 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-29 00:01:13.356331 | orchestrator | 00:01:13.356 STDOUT terraform:  + all_security_group_ids = (known after 
apply) 2025-05-29 00:01:13.356347 | orchestrator | 00:01:13.356 STDOUT terraform:  + all_tags = (known after apply) 2025-05-29 00:01:13.356408 | orchestrator | 00:01:13.356 STDOUT terraform:  + device_id = (known after apply) 2025-05-29 00:01:13.356451 | orchestrator | 00:01:13.356 STDOUT terraform:  + device_owner = (known after apply) 2025-05-29 00:01:13.356493 | orchestrator | 00:01:13.356 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-29 00:01:13.356535 | orchestrator | 00:01:13.356 STDOUT terraform:  + dns_name = (known after apply) 2025-05-29 00:01:13.356582 | orchestrator | 00:01:13.356 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.356598 | orchestrator | 00:01:13.356 STDOUT terraform:  + mac_address = (known after apply) 2025-05-29 00:01:13.356672 | orchestrator | 00:01:13.356 STDOUT terraform:  + network_id = (known after apply) 2025-05-29 00:01:13.356705 | orchestrator | 00:01:13.356 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-29 00:01:13.356729 | orchestrator | 00:01:13.356 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-29 00:01:13.356795 | orchestrator | 00:01:13.356 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.356821 | orchestrator | 00:01:13.356 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-29 00:01:13.356868 | orchestrator | 00:01:13.356 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-29 00:01:13.356884 | orchestrator | 00:01:13.356 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.356924 | orchestrator | 00:01:13.356 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-29 00:01:13.356937 | orchestrator | 00:01:13.356 STDOUT terraform:  } 2025-05-29 00:01:13.356950 | orchestrator | 00:01:13.356 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.356990 | orchestrator | 00:01:13.356 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-29 00:01:13.357002 | 
orchestrator | 00:01:13.356 STDOUT terraform:  } 2025-05-29 00:01:13.357016 | orchestrator | 00:01:13.356 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.357056 | orchestrator | 00:01:13.357 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-29 00:01:13.357085 | orchestrator | 00:01:13.357 STDOUT terraform:  } 2025-05-29 00:01:13.357136 | orchestrator | 00:01:13.357 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.357152 | orchestrator | 00:01:13.357 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-29 00:01:13.357174 | orchestrator | 00:01:13.357 STDOUT terraform:  } 2025-05-29 00:01:13.357188 | orchestrator | 00:01:13.357 STDOUT terraform:  + binding (known after apply) 2025-05-29 00:01:13.357201 | orchestrator | 00:01:13.357 STDOUT terraform:  + fixed_ip { 2025-05-29 00:01:13.357244 | orchestrator | 00:01:13.357 STDOUT terraform:  + ip_address = "192.168.16.12" 2025-05-29 00:01:13.357260 | orchestrator | 00:01:13.357 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-29 00:01:13.357273 | orchestrator | 00:01:13.357 STDOUT terraform:  } 2025-05-29 00:01:13.357287 | orchestrator | 00:01:13.357 STDOUT terraform:  } 2025-05-29 00:01:13.357392 | orchestrator | 00:01:13.357 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[3] will be created 2025-05-29 00:01:13.357411 | orchestrator | 00:01:13.357 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-29 00:01:13.357452 | orchestrator | 00:01:13.357 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-29 00:01:13.357494 | orchestrator | 00:01:13.357 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-29 00:01:13.357546 | orchestrator | 00:01:13.357 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-29 00:01:13.357562 | orchestrator | 00:01:13.357 STDOUT terraform:  + all_tags = (known after apply) 2025-05-29 00:01:13.357621 | orchestrator | 
00:01:13.357 STDOUT terraform:  + device_id = (known after apply) 2025-05-29 00:01:13.357664 | orchestrator | 00:01:13.357 STDOUT terraform:  + device_owner = (known after apply) 2025-05-29 00:01:13.357715 | orchestrator | 00:01:13.357 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-29 00:01:13.357731 | orchestrator | 00:01:13.357 STDOUT terraform:  + dns_name = (known after apply) 2025-05-29 00:01:13.357790 | orchestrator | 00:01:13.357 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.357833 | orchestrator | 00:01:13.357 STDOUT terraform:  + mac_address = (known after apply) 2025-05-29 00:01:13.357885 | orchestrator | 00:01:13.357 STDOUT terraform:  + network_id = (known after apply) 2025-05-29 00:01:13.357900 | orchestrator | 00:01:13.357 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-29 00:01:13.357954 | orchestrator | 00:01:13.357 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-29 00:01:13.357999 | orchestrator | 00:01:13.357 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.358078 | orchestrator | 00:01:13.357 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-29 00:01:13.358129 | orchestrator | 00:01:13.358 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-29 00:01:13.358142 | orchestrator | 00:01:13.358 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.358188 | orchestrator | 00:01:13.358 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-29 00:01:13.358198 | orchestrator | 00:01:13.358 STDOUT terraform:  } 2025-05-29 00:01:13.358210 | orchestrator | 00:01:13.358 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.358257 | orchestrator | 00:01:13.358 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-29 00:01:13.358275 | orchestrator | 00:01:13.358 STDOUT terraform:  } 2025-05-29 00:01:13.358286 | orchestrator | 00:01:13.358 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 
00:01:13.358322 | orchestrator | 00:01:13.358 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-29 00:01:13.358332 | orchestrator | 00:01:13.358 STDOUT terraform:  } 2025-05-29 00:01:13.358343 | orchestrator | 00:01:13.358 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.358387 | orchestrator | 00:01:13.358 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-29 00:01:13.358398 | orchestrator | 00:01:13.358 STDOUT terraform:  } 2025-05-29 00:01:13.358409 | orchestrator | 00:01:13.358 STDOUT terraform:  + binding (known after apply) 2025-05-29 00:01:13.358451 | orchestrator | 00:01:13.358 STDOUT terraform:  + fixed_ip { 2025-05-29 00:01:13.358464 | orchestrator | 00:01:13.358 STDOUT terraform:  + ip_address = "192.168.16.13" 2025-05-29 00:01:13.358508 | orchestrator | 00:01:13.358 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-29 00:01:13.358518 | orchestrator | 00:01:13.358 STDOUT terraform:  } 2025-05-29 00:01:13.358529 | orchestrator | 00:01:13.358 STDOUT terraform:  } 2025-05-29 00:01:13.358585 | orchestrator | 00:01:13.358 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[4] will be created 2025-05-29 00:01:13.358649 | orchestrator | 00:01:13.358 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-29 00:01:13.358692 | orchestrator | 00:01:13.358 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-29 00:01:13.358737 | orchestrator | 00:01:13.358 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-29 00:01:13.358782 | orchestrator | 00:01:13.358 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-29 00:01:13.358832 | orchestrator | 00:01:13.358 STDOUT terraform:  + all_tags = (known after apply) 2025-05-29 00:01:13.358882 | orchestrator | 00:01:13.358 STDOUT terraform:  + device_id = (known after apply) 2025-05-29 00:01:13.358895 | orchestrator | 00:01:13.358 STDOUT terraform:  + device_owner = (known after 
apply) 2025-05-29 00:01:13.358954 | orchestrator | 00:01:13.358 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-29 00:01:13.358997 | orchestrator | 00:01:13.358 STDOUT terraform:  + dns_name = (known after apply) 2025-05-29 00:01:13.359042 | orchestrator | 00:01:13.358 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.359104 | orchestrator | 00:01:13.359 STDOUT terraform:  + mac_address = (known after apply) 2025-05-29 00:01:13.359140 | orchestrator | 00:01:13.359 STDOUT terraform:  + network_id = (known after apply) 2025-05-29 00:01:13.359184 | orchestrator | 00:01:13.359 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-29 00:01:13.359231 | orchestrator | 00:01:13.359 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-29 00:01:13.359275 | orchestrator | 00:01:13.359 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.359322 | orchestrator | 00:01:13.359 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-29 00:01:13.359342 | orchestrator | 00:01:13.359 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-29 00:01:13.359377 | orchestrator | 00:01:13.359 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.359421 | orchestrator | 00:01:13.359 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-29 00:01:13.359431 | orchestrator | 00:01:13.359 STDOUT terraform:  } 2025-05-29 00:01:13.359443 | orchestrator | 00:01:13.359 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.359477 | orchestrator | 00:01:13.359 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-29 00:01:13.359487 | orchestrator | 00:01:13.359 STDOUT terraform:  } 2025-05-29 00:01:13.359498 | orchestrator | 00:01:13.359 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.359541 | orchestrator | 00:01:13.359 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-29 00:01:13.359552 | orchestrator | 00:01:13.359 STDOUT terraform:  } 
2025-05-29 00:01:13.359563 | orchestrator | 00:01:13.359 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.359607 | orchestrator | 00:01:13.359 STDOUT terraform:  + ip_address = "192.168.16.9/20" 2025-05-29 00:01:13.359618 | orchestrator | 00:01:13.359 STDOUT terraform:  } 2025-05-29 00:01:13.359629 | orchestrator | 00:01:13.359 STDOUT terraform:  + binding (known after apply) 2025-05-29 00:01:13.359640 | orchestrator | 00:01:13.359 STDOUT terraform:  + fixed_ip { 2025-05-29 00:01:13.359685 | orchestrator | 00:01:13.359 STDOUT terraform:  + ip_address = "192.168.16.14" 2025-05-29 00:01:13.359699 | orchestrator | 00:01:13.359 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-29 00:01:13.359741 | orchestrator | 00:01:13.359 STDOUT terraform:  } 2025-05-29 00:01:13.359751 | orchestrator | 00:01:13.359 STDOUT terraform:  } 2025-05-29 00:01:13.359811 | orchestrator | 00:01:13.359 STDOUT terraform:  # openstack_networking_port_v2.node_port_management[5] will be created 2025-05-29 00:01:13.359865 | orchestrator | 00:01:13.359 STDOUT terraform:  + resource "openstack_networking_port_v2" "node_port_management" { 2025-05-29 00:01:13.359909 | orchestrator | 00:01:13.359 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-29 00:01:13.359953 | orchestrator | 00:01:13.359 STDOUT terraform:  + all_fixed_ips = (known after apply) 2025-05-29 00:01:13.359997 | orchestrator | 00:01:13.359 STDOUT terraform:  + all_security_group_ids = (known after apply) 2025-05-29 00:01:13.360042 | orchestrator | 00:01:13.359 STDOUT terraform:  + all_tags = (known after apply) 2025-05-29 00:01:13.360111 | orchestrator | 00:01:13.360 STDOUT terraform:  + device_id = (known after apply) 2025-05-29 00:01:13.360158 | orchestrator | 00:01:13.360 STDOUT terraform:  + device_owner = (known after apply) 2025-05-29 00:01:13.360201 | orchestrator | 00:01:13.360 STDOUT terraform:  + dns_assignment = (known after apply) 2025-05-29 00:01:13.360245 | orchestrator | 
00:01:13.360 STDOUT terraform:  + dns_name = (known after apply) 2025-05-29 00:01:13.360281 | orchestrator | 00:01:13.360 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.360324 | orchestrator | 00:01:13.360 STDOUT terraform:  + mac_address = (known after apply) 2025-05-29 00:01:13.360422 | orchestrator | 00:01:13.360 STDOUT terraform:  + network_id = (known after apply) 2025-05-29 00:01:13.360437 | orchestrator | 00:01:13.360 STDOUT terraform:  + port_security_enabled = (known after apply) 2025-05-29 00:01:13.360479 | orchestrator | 00:01:13.360 STDOUT terraform:  + qos_policy_id = (known after apply) 2025-05-29 00:01:13.360514 | orchestrator | 00:01:13.360 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.360549 | orchestrator | 00:01:13.360 STDOUT terraform:  + security_group_ids = (known after apply) 2025-05-29 00:01:13.360584 | orchestrator | 00:01:13.360 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-29 00:01:13.360600 | orchestrator | 00:01:13.360 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.360636 | orchestrator | 00:01:13.360 STDOUT terraform:  + ip_address = "192.168.112.0/20" 2025-05-29 00:01:13.360646 | orchestrator | 00:01:13.360 STDOUT terraform:  } 2025-05-29 00:01:13.360657 | orchestrator | 00:01:13.360 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.360691 | orchestrator | 00:01:13.360 STDOUT terraform:  + ip_address = "192.168.16.254/20" 2025-05-29 00:01:13.360701 | orchestrator | 00:01:13.360 STDOUT terraform:  } 2025-05-29 00:01:13.360712 | orchestrator | 00:01:13.360 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.360746 | orchestrator | 00:01:13.360 STDOUT terraform:  + ip_address = "192.168.16.8/20" 2025-05-29 00:01:13.360756 | orchestrator | 00:01:13.360 STDOUT terraform:  } 2025-05-29 00:01:13.360767 | orchestrator | 00:01:13.360 STDOUT terraform:  + allowed_address_pairs { 2025-05-29 00:01:13.360808 | orchestrator | 00:01:13.360 STDOUT 
terraform:  + ip_address = "192.168.16.9/20" 2025-05-29 00:01:13.360818 | orchestrator | 00:01:13.360 STDOUT terraform:  } 2025-05-29 00:01:13.360830 | orchestrator | 00:01:13.360 STDOUT terraform:  + binding (known after apply) 2025-05-29 00:01:13.360841 | orchestrator | 00:01:13.360 STDOUT terraform:  + fixed_ip { 2025-05-29 00:01:13.360876 | orchestrator | 00:01:13.360 STDOUT terraform:  + ip_address = "192.168.16.15" 2025-05-29 00:01:13.360889 | orchestrator | 00:01:13.360 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-29 00:01:13.360900 | orchestrator | 00:01:13.360 STDOUT terraform:  } 2025-05-29 00:01:13.360911 | orchestrator | 00:01:13.360 STDOUT terraform:  } 2025-05-29 00:01:13.361154 | orchestrator | 00:01:13.360 STDOUT terraform:  # openstack_networking_router_interface_v2.router_interface will be created 2025-05-29 00:01:13.361223 | orchestrator | 00:01:13.360 STDOUT terraform:  + resource "openstack_networking_router_interface_v2" "router_interface" { 2025-05-29 00:01:13.361238 | orchestrator | 00:01:13.361 STDOUT terraform:  + force_destroy = false 2025-05-29 00:01:13.361253 | orchestrator | 00:01:13.361 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.361277 | orchestrator | 00:01:13.361 STDOUT terraform:  + port_id = (known after apply) 2025-05-29 00:01:13.361290 | orchestrator | 00:01:13.361 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.361303 | orchestrator | 00:01:13.361 STDOUT terraform:  + router_id = (known after apply) 2025-05-29 00:01:13.361341 | orchestrator | 00:01:13.361 STDOUT terraform:  + subnet_id = (known after apply) 2025-05-29 00:01:13.361355 | orchestrator | 00:01:13.361 STDOUT terraform:  } 2025-05-29 00:01:13.361368 | orchestrator | 00:01:13.361 STDOUT terraform:  # openstack_networking_router_v2.router will be created 2025-05-29 00:01:13.361384 | orchestrator | 00:01:13.361 STDOUT terraform:  + resource "openstack_networking_router_v2" "router" { 2025-05-29 00:01:13.361397 
| orchestrator | 00:01:13.361 STDOUT terraform:  + admin_state_up = (known after apply) 2025-05-29 00:01:13.361409 | orchestrator | 00:01:13.361 STDOUT terraform:  + all_tags = (known after apply) 2025-05-29 00:01:13.361421 | orchestrator | 00:01:13.361 STDOUT terraform:  + availability_zone_hints = [ 2025-05-29 00:01:13.361445 | orchestrator | 00:01:13.361 STDOUT terraform:  + "nova", 2025-05-29 00:01:13.361465 | orchestrator | 00:01:13.361 STDOUT terraform:  ] 2025-05-29 00:01:13.361495 | orchestrator | 00:01:13.361 STDOUT terraform:  + distributed = (known after apply) 2025-05-29 00:01:13.361521 | orchestrator | 00:01:13.361 STDOUT terraform:  + enable_snat = (known after apply) 2025-05-29 00:01:13.361607 | orchestrator | 00:01:13.361 STDOUT terraform:  + external_network_id = "e6be7364-bfd8-4de7-8120-8f41c69a139a" 2025-05-29 00:01:13.361630 | orchestrator | 00:01:13.361 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.361645 | orchestrator | 00:01:13.361 STDOUT terraform:  + name = "testbed" 2025-05-29 00:01:13.361692 | orchestrator | 00:01:13.361 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.361710 | orchestrator | 00:01:13.361 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-29 00:01:13.361767 | orchestrator | 00:01:13.361 STDOUT terraform:  + external_fixed_ip (known after apply) 2025-05-29 00:01:13.361781 | orchestrator | 00:01:13.361 STDOUT terraform:  } 2025-05-29 00:01:13.361838 | orchestrator | 00:01:13.361 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule1 will be created 2025-05-29 00:01:13.361896 | orchestrator | 00:01:13.361 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule1" { 2025-05-29 00:01:13.361910 | orchestrator | 00:01:13.361 STDOUT terraform:  + description = "ssh" 2025-05-29 00:01:13.361925 | orchestrator | 00:01:13.361 STDOUT terraform:  + direction = "ingress" 2025-05-29 00:01:13.361941 | 
orchestrator | 00:01:13.361 STDOUT terraform:  + ethertype = "IPv4" 2025-05-29 00:01:13.361987 | orchestrator | 00:01:13.361 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.362001 | orchestrator | 00:01:13.361 STDOUT terraform:  + port_range_max = 22 2025-05-29 00:01:13.362043 | orchestrator | 00:01:13.361 STDOUT terraform:  + port_range_min = 22 2025-05-29 00:01:13.362058 | orchestrator | 00:01:13.362 STDOUT terraform:  + protocol = "tcp" 2025-05-29 00:01:13.362100 | orchestrator | 00:01:13.362 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.362147 | orchestrator | 00:01:13.362 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-29 00:01:13.362165 | orchestrator | 00:01:13.362 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-29 00:01:13.362190 | orchestrator | 00:01:13.362 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-29 00:01:13.362202 | orchestrator | 00:01:13.362 STDOUT terraform:  + tenant 2025-05-29 00:01:13.362286 | orchestrator | 00:01:13.362 STDOUT terraform: _id = (known after apply) 2025-05-29 00:01:13.362308 | orchestrator | 00:01:13.362 STDOUT terraform:  } 2025-05-29 00:01:13.362331 | orchestrator | 00:01:13.362 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule2 will be created 2025-05-29 00:01:13.362395 | orchestrator | 00:01:13.362 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule2" { 2025-05-29 00:01:13.362420 | orchestrator | 00:01:13.362 STDOUT terraform:  + description = "wireguard" 2025-05-29 00:01:13.362437 | orchestrator | 00:01:13.362 STDOUT terraform:  + direction = "ingress" 2025-05-29 00:01:13.362449 | orchestrator | 00:01:13.362 STDOUT terraform:  + ethertype = "IPv4" 2025-05-29 00:01:13.362464 | orchestrator | 00:01:13.362 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.362479 | orchestrator | 00:01:13.362 STDOUT terraform:  + port_range_max 
= 51820 2025-05-29 00:01:13.362494 | orchestrator | 00:01:13.362 STDOUT terraform:  + port_range_min = 51820 2025-05-29 00:01:13.362510 | orchestrator | 00:01:13.362 STDOUT terraform:  + protocol = "udp" 2025-05-29 00:01:13.362567 | orchestrator | 00:01:13.362 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.362675 | orchestrator | 00:01:13.362 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-29 00:01:13.362693 | orchestrator | 00:01:13.362 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-29 00:01:13.362705 | orchestrator | 00:01:13.362 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-29 00:01:13.362729 | orchestrator | 00:01:13.362 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-29 00:01:13.362741 | orchestrator | 00:01:13.362 STDOUT terraform:  } 2025-05-29 00:01:13.362757 | orchestrator | 00:01:13.362 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule3 will be created 2025-05-29 00:01:13.362770 | orchestrator | 00:01:13.362 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule3" { 2025-05-29 00:01:13.362785 | orchestrator | 00:01:13.362 STDOUT terraform:  + direction = "ingress" 2025-05-29 00:01:13.362800 | orchestrator | 00:01:13.362 STDOUT terraform:  + ethertype = "IPv4" 2025-05-29 00:01:13.362856 | orchestrator | 00:01:13.362 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.362870 | orchestrator | 00:01:13.362 STDOUT terraform:  + protocol = "tcp" 2025-05-29 00:01:13.362885 | orchestrator | 00:01:13.362 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.362901 | orchestrator | 00:01:13.362 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-29 00:01:13.362948 | orchestrator | 00:01:13.362 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-29 00:01:13.362965 | orchestrator | 00:01:13.362 STDOUT terraform:  + security_group_id = 
(known after apply) 2025-05-29 00:01:13.362980 | orchestrator | 00:01:13.362 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-29 00:01:13.363006 | orchestrator | 00:01:13.362 STDOUT terraform:  } 2025-05-29 00:01:13.363054 | orchestrator | 00:01:13.362 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule4 will be created 2025-05-29 00:01:13.363104 | orchestrator | 00:01:13.363 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule4" { 2025-05-29 00:01:13.363121 | orchestrator | 00:01:13.363 STDOUT terraform:  + direction = "ingress" 2025-05-29 00:01:13.363135 | orchestrator | 00:01:13.363 STDOUT terraform:  + ethertype = "IPv4" 2025-05-29 00:01:13.363181 | orchestrator | 00:01:13.363 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.363195 | orchestrator | 00:01:13.363 STDOUT terraform:  + protocol = "udp" 2025-05-29 00:01:13.363210 | orchestrator | 00:01:13.363 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.363254 | orchestrator | 00:01:13.363 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-29 00:01:13.363271 | orchestrator | 00:01:13.363 STDOUT terraform:  + remote_ip_prefix = "192.168.16.0/20" 2025-05-29 00:01:13.363326 | orchestrator | 00:01:13.363 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-29 00:01:13.363340 | orchestrator | 00:01:13.363 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-29 00:01:13.363355 | orchestrator | 00:01:13.363 STDOUT terraform:  } 2025-05-29 00:01:13.363399 | orchestrator | 00:01:13.363 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_management_rule5 will be created 2025-05-29 00:01:13.363457 | orchestrator | 00:01:13.363 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_management_rule5" { 2025-05-29 00:01:13.363471 | orchestrator | 00:01:13.363 STDOUT terraform:  + direction = 
"ingress" 2025-05-29 00:01:13.363486 | orchestrator | 00:01:13.363 STDOUT terraform:  + ethertype = "IPv4" 2025-05-29 00:01:13.363501 | orchestrator | 00:01:13.363 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.363516 | orchestrator | 00:01:13.363 STDOUT terraform:  + protocol = "icmp" 2025-05-29 00:01:13.363570 | orchestrator | 00:01:13.363 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.363588 | orchestrator | 00:01:13.363 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-29 00:01:13.363603 | orchestrator | 00:01:13.363 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-29 00:01:13.363657 | orchestrator | 00:01:13.363 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-29 00:01:13.363671 | orchestrator | 00:01:13.363 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-29 00:01:13.363686 | orchestrator | 00:01:13.363 STDOUT terraform:  } 2025-05-29 00:01:13.363730 | orchestrator | 00:01:13.363 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule1 will be created 2025-05-29 00:01:13.363775 | orchestrator | 00:01:13.363 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule1" { 2025-05-29 00:01:13.363792 | orchestrator | 00:01:13.363 STDOUT terraform:  + direction = "ingress" 2025-05-29 00:01:13.363807 | orchestrator | 00:01:13.363 STDOUT terraform:  + ethertype = "IPv4" 2025-05-29 00:01:13.363830 | orchestrator | 00:01:13.363 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.363851 | orchestrator | 00:01:13.363 STDOUT terraform:  + protocol = "tcp" 2025-05-29 00:01:13.363866 | orchestrator | 00:01:13.363 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.363913 | orchestrator | 00:01:13.363 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-29 00:01:13.363927 | orchestrator | 00:01:13.363 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-29 
00:01:13.363942 | orchestrator | 00:01:13.363 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-29 00:01:13.363985 | orchestrator | 00:01:13.363 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-29 00:01:13.363999 | orchestrator | 00:01:13.363 STDOUT terraform:  } 2025-05-29 00:01:13.364042 | orchestrator | 00:01:13.363 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule2 will be created 2025-05-29 00:01:13.364147 | orchestrator | 00:01:13.364 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule2" { 2025-05-29 00:01:13.364164 | orchestrator | 00:01:13.364 STDOUT terraform:  + direction = "ingress" 2025-05-29 00:01:13.364176 | orchestrator | 00:01:13.364 STDOUT terraform:  + ethertype = "IPv4" 2025-05-29 00:01:13.364188 | orchestrator | 00:01:13.364 STDOUT terraform:  + id = (known after apply) 2025-05-29 00:01:13.364203 | orchestrator | 00:01:13.364 STDOUT terraform:  + protocol = "udp" 2025-05-29 00:01:13.364215 | orchestrator | 00:01:13.364 STDOUT terraform:  + region = (known after apply) 2025-05-29 00:01:13.364228 | orchestrator | 00:01:13.364 STDOUT terraform:  + remote_group_id = (known after apply) 2025-05-29 00:01:13.364242 | orchestrator | 00:01:13.364 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0" 2025-05-29 00:01:13.364282 | orchestrator | 00:01:13.364 STDOUT terraform:  + security_group_id = (known after apply) 2025-05-29 00:01:13.364297 | orchestrator | 00:01:13.364 STDOUT terraform:  + tenant_id = (known after apply) 2025-05-29 00:01:13.364308 | orchestrator | 00:01:13.364 STDOUT terraform:  } 2025-05-29 00:01:13.364368 | orchestrator | 00:01:13.364 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_node_rule3 will be created 2025-05-29 00:01:13.364419 | orchestrator | 00:01:13.364 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_node_rule3" { 2025-05-29 00:01:13.364434 | orchestrator 
| 00:01:13.364 STDOUT terraform:  + direction = "ingress"
2025-05-29 00:01:13.364447 | orchestrator | 00:01:13.364 STDOUT terraform:  + ethertype = "IPv4"
2025-05-29 00:01:13.364497 | orchestrator | 00:01:13.364 STDOUT terraform:  + id = (known after apply)
2025-05-29 00:01:13.364510 | orchestrator | 00:01:13.364 STDOUT terraform:  + protocol = "icmp"
2025-05-29 00:01:13.364523 | orchestrator | 00:01:13.364 STDOUT terraform:  + region = (known after apply)
2025-05-29 00:01:13.364563 | orchestrator | 00:01:13.364 STDOUT terraform:  + remote_group_id = (known after apply)
2025-05-29 00:01:13.364575 | orchestrator | 00:01:13.364 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-29 00:01:13.364596 | orchestrator | 00:01:13.364 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-29 00:01:13.364648 | orchestrator | 00:01:13.364 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-29 00:01:13.364661 | orchestrator | 00:01:13.364 STDOUT terraform:  }
2025-05-29 00:01:13.364675 | orchestrator | 00:01:13.364 STDOUT terraform:  # openstack_networking_secgroup_rule_v2.security_group_rule_vrrp will be created
2025-05-29 00:01:13.364744 | orchestrator | 00:01:13.364 STDOUT terraform:  + resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
2025-05-29 00:01:13.364757 | orchestrator | 00:01:13.364 STDOUT terraform:  + description = "vrrp"
2025-05-29 00:01:13.364771 | orchestrator | 00:01:13.364 STDOUT terraform:  + direction = "ingress"
2025-05-29 00:01:13.364785 | orchestrator | 00:01:13.364 STDOUT terraform:  + ethertype = "IPv4"
2025-05-29 00:01:13.364824 | orchestrator | 00:01:13.364 STDOUT terraform:  + id = (known after apply)
2025-05-29 00:01:13.364841 | orchestrator | 00:01:13.364 STDOUT terraform:  + protocol = "112"
2025-05-29 00:01:13.364855 | orchestrator | 00:01:13.364 STDOUT terraform:  + region = (known after apply)
2025-05-29 00:01:13.364898 | orchestrator | 00:01:13.364 STDOUT terraform:  + remote_group_id = (known after apply)
2025-05-29 00:01:13.364913 | orchestrator | 00:01:13.364 STDOUT terraform:  + remote_ip_prefix = "0.0.0.0/0"
2025-05-29 00:01:13.364926 | orchestrator | 00:01:13.364 STDOUT terraform:  + security_group_id = (known after apply)
2025-05-29 00:01:13.364967 | orchestrator | 00:01:13.364 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-29 00:01:13.364979 | orchestrator | 00:01:13.364 STDOUT terraform:  }
2025-05-29 00:01:13.365030 | orchestrator | 00:01:13.364 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_management will be created
2025-05-29 00:01:13.365086 | orchestrator | 00:01:13.365 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_management" {
2025-05-29 00:01:13.365102 | orchestrator | 00:01:13.365 STDOUT terraform:  + all_tags = (known after apply)
2025-05-29 00:01:13.365158 | orchestrator | 00:01:13.365 STDOUT terraform:  + description = "management security group"
2025-05-29 00:01:13.365174 | orchestrator | 00:01:13.365 STDOUT terraform:  + id = (known after apply)
2025-05-29 00:01:13.365187 | orchestrator | 00:01:13.365 STDOUT terraform:  + name = "testbed-management"
2025-05-29 00:01:13.365236 | orchestrator | 00:01:13.365 STDOUT terraform:  + region = (known after apply)
2025-05-29 00:01:13.365252 | orchestrator | 00:01:13.365 STDOUT terraform:  + stateful = (known after apply)
2025-05-29 00:01:13.365265 | orchestrator | 00:01:13.365 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-29 00:01:13.365279 | orchestrator | 00:01:13.365 STDOUT terraform:  }
2025-05-29 00:01:13.365352 | orchestrator | 00:01:13.365 STDOUT terraform:  # openstack_networking_secgroup_v2.security_group_node will be created
2025-05-29 00:01:13.365379 | orchestrator | 00:01:13.365 STDOUT terraform:  + resource "openstack_networking_secgroup_v2" "security_group_node" {
2025-05-29 00:01:13.365401 | orchestrator | 00:01:13.365 STDOUT terraform:  + all_tags = (known after apply)
2025-05-29 00:01:13.365423 | orchestrator | 00:01:13.365 STDOUT terraform:  + description = "node security group"
2025-05-29 00:01:13.365458 | orchestrator | 00:01:13.365 STDOUT terraform:  + id = (known after apply)
2025-05-29 00:01:13.365481 | orchestrator | 00:01:13.365 STDOUT terraform:  + name = "testbed-node"
2025-05-29 00:01:13.365503 | orchestrator | 00:01:13.365 STDOUT terraform:  + region = (known after apply)
2025-05-29 00:01:13.365527 | orchestrator | 00:01:13.365 STDOUT terraform:  + stateful = (known after apply)
2025-05-29 00:01:13.365544 | orchestrator | 00:01:13.365 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-29 00:01:13.365557 | orchestrator | 00:01:13.365 STDOUT terraform:  }
2025-05-29 00:01:13.365618 | orchestrator | 00:01:13.365 STDOUT terraform:  # openstack_networking_subnet_v2.subnet_management will be created
2025-05-29 00:01:13.365671 | orchestrator | 00:01:13.365 STDOUT terraform:  + resource "openstack_networking_subnet_v2" "subnet_management" {
2025-05-29 00:01:13.365683 | orchestrator | 00:01:13.365 STDOUT terraform:  + all_tags = (known after apply)
2025-05-29 00:01:13.365697 | orchestrator | 00:01:13.365 STDOUT terraform:  + cidr = "192.168.16.0/20"
2025-05-29 00:01:13.365710 | orchestrator | 00:01:13.365 STDOUT terraform:  + dns_nameservers = [
2025-05-29 00:01:13.365724 | orchestrator | 00:01:13.365 STDOUT terraform:  + "8.8.8.8",
2025-05-29 00:01:13.365737 | orchestrator | 00:01:13.365 STDOUT terraform:  + "9.9.9.9",
2025-05-29 00:01:13.365751 | orchestrator | 00:01:13.365 STDOUT terraform:  ]
2025-05-29 00:01:13.365764 | orchestrator | 00:01:13.365 STDOUT terraform:  + enable_dhcp = true
2025-05-29 00:01:13.365815 | orchestrator | 00:01:13.365 STDOUT terraform:  + gateway_ip = (known after apply)
2025-05-29 00:01:13.365831 | orchestrator | 00:01:13.365 STDOUT terraform:  + id = (known after apply)
2025-05-29 00:01:13.365844 | orchestrator | 00:01:13.365 STDOUT terraform:  + ip_version = 4
2025-05-29 00:01:13.365884 | orchestrator | 00:01:13.365 STDOUT terraform:  + ipv6_address_mode = (known after apply)
2025-05-29 00:01:13.365899 | orchestrator | 00:01:13.365 STDOUT terraform:  + ipv6_ra_mode = (known after apply)
2025-05-29 00:01:13.365948 | orchestrator | 00:01:13.365 STDOUT terraform:  + name = "subnet-testbed-management"
2025-05-29 00:01:13.365964 | orchestrator | 00:01:13.365 STDOUT terraform:  + network_id = (known after apply)
2025-05-29 00:01:13.365978 | orchestrator | 00:01:13.365 STDOUT terraform:  + no_gateway = false
2025-05-29 00:01:13.366040 | orchestrator | 00:01:13.365 STDOUT terraform:  + region = (known after apply)
2025-05-29 00:01:13.366058 | orchestrator | 00:01:13.365 STDOUT terraform:  + service_types = (known after apply)
2025-05-29 00:01:13.366089 | orchestrator | 00:01:13.366 STDOUT terraform:  + tenant_id = (known after apply)
2025-05-29 00:01:13.366103 | orchestrator | 00:01:13.366 STDOUT terraform:  + allocation_pool {
2025-05-29 00:01:13.366155 | orchestrator | 00:01:13.366 STDOUT terraform:  + end = "192.168.31.250"
2025-05-29 00:01:13.366167 | orchestrator | 00:01:13.366 STDOUT terraform:  + start = "192.168.31.200"
2025-05-29 00:01:13.366181 | orchestrator | 00:01:13.366 STDOUT terraform:  }
2025-05-29 00:01:13.366192 | orchestrator | 00:01:13.366 STDOUT terraform:  }
2025-05-29 00:01:13.366212 | orchestrator | 00:01:13.366 STDOUT terraform:  # terraform_data.image will be created
2025-05-29 00:01:13.366258 | orchestrator | 00:01:13.366 STDOUT terraform:  + resource "terraform_data" "image" {
2025-05-29 00:01:13.366273 | orchestrator | 00:01:13.366 STDOUT terraform:  + id = (known after apply)
2025-05-29 00:01:13.366284 | orchestrator | 00:01:13.366 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-05-29 00:01:13.366294 | orchestrator | 00:01:13.366 STDOUT terraform:  + output = (known after apply)
2025-05-29 00:01:13.366308 | orchestrator | 00:01:13.366 STDOUT terraform:  }
2025-05-29 00:01:13.366319 | orchestrator | 00:01:13.366 STDOUT terraform:  # terraform_data.image_node will be created
2025-05-29 00:01:13.366332 | orchestrator | 00:01:13.366 STDOUT terraform:  + resource "terraform_data" "image_node" {
2025-05-29 00:01:13.366356 | orchestrator | 00:01:13.366 STDOUT terraform:  + id = (known after apply)
2025-05-29 00:01:13.366370 | orchestrator | 00:01:13.366 STDOUT terraform:  + input = "Ubuntu 24.04"
2025-05-29 00:01:13.366412 | orchestrator | 00:01:13.366 STDOUT terraform:  + output = (known after apply)
2025-05-29 00:01:13.366424 | orchestrator | 00:01:13.366 STDOUT terraform:  }
2025-05-29 00:01:13.366438 | orchestrator | 00:01:13.366 STDOUT terraform: Plan: 64 to add, 0 to change, 0 to destroy.
2025-05-29 00:01:13.366449 | orchestrator | 00:01:13.366 STDOUT terraform: Changes to Outputs:
2025-05-29 00:01:13.366463 | orchestrator | 00:01:13.366 STDOUT terraform:  + manager_address = (sensitive value)
2025-05-29 00:01:13.366476 | orchestrator | 00:01:13.366 STDOUT terraform:  + private_key = (sensitive value)
2025-05-29 00:01:13.586980 | orchestrator | 00:01:13.586 STDOUT terraform: terraform_data.image: Creating...
2025-05-29 00:01:13.587114 | orchestrator | 00:01:13.586 STDOUT terraform: terraform_data.image: Creation complete after 0s [id=3e68f37b-1338-4092-cabc-e41670433be0]
2025-05-29 00:01:13.587134 | orchestrator | 00:01:13.586 STDOUT terraform: terraform_data.image_node: Creating...
2025-05-29 00:01:13.587148 | orchestrator | 00:01:13.587 STDOUT terraform: terraform_data.image_node: Creation complete after 0s [id=be030590-9203-025d-5527-f6faafa49162]
2025-05-29 00:01:13.610678 | orchestrator | 00:01:13.610 STDOUT terraform: data.openstack_images_image_v2.image_node: Reading...
2025-05-29 00:01:13.614843 | orchestrator | 00:01:13.614 STDOUT terraform: data.openstack_images_image_v2.image: Reading...
2025-05-29 00:01:13.615144 | orchestrator | 00:01:13.614 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creating...
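The plan output above for the VRRP rule and the management subnet can be read back into source form. The sketch below is a hedged reconstruction from the logged attribute values only, not the testbed's actual HCL; the cross-resource references (`security_group_node`, `net_management`) are assumed:

```hcl
# Reconstructed from the plan output above; references between resources are assumed.
resource "openstack_networking_secgroup_rule_v2" "security_group_rule_vrrp" {
  description       = "vrrp"
  direction         = "ingress"
  ethertype         = "IPv4"
  protocol          = "112" # VRRP is IP protocol 112
  remote_ip_prefix  = "0.0.0.0/0"
  security_group_id = openstack_networking_secgroup_v2.security_group_node.id
}

resource "openstack_networking_subnet_v2" "subnet_management" {
  name            = "subnet-testbed-management"
  network_id      = openstack_networking_network_v2.net_management.id
  cidr            = "192.168.16.0/20"
  ip_version      = 4
  enable_dhcp     = true
  dns_nameservers = ["8.8.8.8", "9.9.9.9"]

  allocation_pool {
    start = "192.168.31.200"
    end   = "192.168.31.250"
  }
}
```

Note that the DHCP allocation pool (192.168.31.200-250) is deliberately placed at the top of the /20, leaving the rest of the range free for static addressing.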
2025-05-29 00:01:13.616141 | orchestrator | 00:01:13.615 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creating...
2025-05-29 00:01:13.619652 | orchestrator | 00:01:13.619 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creating...
2025-05-29 00:01:13.619746 | orchestrator | 00:01:13.619 STDOUT terraform: openstack_compute_keypair_v2.key: Creating...
2025-05-29 00:01:13.621551 | orchestrator | 00:01:13.619 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creating...
2025-05-29 00:01:13.621627 | orchestrator | 00:01:13.620 STDOUT terraform: openstack_networking_network_v2.net_management: Creating...
2025-05-29 00:01:13.623346 | orchestrator | 00:01:13.623 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creating...
2025-05-29 00:01:13.625351 | orchestrator | 00:01:13.625 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creating...
2025-05-29 00:01:14.067955 | orchestrator | 00:01:14.067 STDOUT terraform: data.openstack_images_image_v2.image_node: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-05-29 00:01:14.075888 | orchestrator | 00:01:14.075 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creating...
2025-05-29 00:01:14.078132 | orchestrator | 00:01:14.077 STDOUT terraform: data.openstack_images_image_v2.image: Read complete after 0s [id=cd9ae1ce-c4eb-4380-9087-2aa040df6990]
2025-05-29 00:01:14.086930 | orchestrator | 00:01:14.086 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creating...
2025-05-29 00:01:14.129180 | orchestrator | 00:01:14.128 STDOUT terraform: openstack_compute_keypair_v2.key: Creation complete after 0s [id=testbed]
2025-05-29 00:01:14.146776 | orchestrator | 00:01:14.146 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creating...
2025-05-29 00:01:19.604533 | orchestrator | 00:01:19.604 STDOUT terraform: openstack_networking_network_v2.net_management: Creation complete after 6s [id=7197b975-c449-422e-807d-4969232589e6]
2025-05-29 00:01:19.620015 | orchestrator | 00:01:19.619 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creating...
2025-05-29 00:01:23.615802 | orchestrator | 00:01:23.615 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Still creating... [10s elapsed]
2025-05-29 00:01:23.616829 | orchestrator | 00:01:23.616 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Still creating... [10s elapsed]
2025-05-29 00:01:23.621427 | orchestrator | 00:01:23.621 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Still creating... [10s elapsed]
2025-05-29 00:01:23.621639 | orchestrator | 00:01:23.621 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Still creating... [10s elapsed]
2025-05-29 00:01:23.624834 | orchestrator | 00:01:23.624 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Still creating... [10s elapsed]
2025-05-29 00:01:23.626094 | orchestrator | 00:01:23.625 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Still creating... [10s elapsed]
2025-05-29 00:01:24.076731 | orchestrator | 00:01:24.076 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Still creating... [10s elapsed]
2025-05-29 00:01:24.088099 | orchestrator | 00:01:24.087 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Still creating... [10s elapsed]
2025-05-29 00:01:24.147632 | orchestrator | 00:01:24.147 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Still creating... [10s elapsed]
2025-05-29 00:01:24.196400 | orchestrator | 00:01:24.196 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[4]: Creation complete after 10s [id=f7dbb189-5858-4eca-9499-fceb9ae8f8d2]
2025-05-29 00:01:24.202383 | orchestrator | 00:01:24.201 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[5]: Creation complete after 10s [id=baffed07-1ba6-4c69-bef3-fae49f76e29e]
2025-05-29 00:01:24.209689 | orchestrator | 00:01:24.209 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creating...
2025-05-29 00:01:24.214799 | orchestrator | 00:01:24.214 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creating...
2025-05-29 00:01:24.226321 | orchestrator | 00:01:24.225 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[1]: Creation complete after 10s [id=ab52b3eb-0fd7-41fe-9d4d-bdc516081274]
2025-05-29 00:01:24.232521 | orchestrator | 00:01:24.232 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creating...
2025-05-29 00:01:24.237851 | orchestrator | 00:01:24.237 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[3]: Creation complete after 10s [id=872f8c6a-38b8-4598-af69-d174e2488207]
2025-05-29 00:01:24.241234 | orchestrator | 00:01:24.240 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[6]: Creation complete after 10s [id=81c2fe1f-38cc-49f7-ae7d-3d898626253d]
2025-05-29 00:01:24.244231 | orchestrator | 00:01:24.243 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creating...
2025-05-29 00:01:24.247882 | orchestrator | 00:01:24.247 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creating...
2025-05-29 00:01:24.252767 | orchestrator | 00:01:24.252 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[0]: Creation complete after 10s [id=172ad3b6-4b22-4cdf-a28e-ac5da2182fda]
2025-05-29 00:01:24.269357 | orchestrator | 00:01:24.269 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creating...
2025-05-29 00:01:24.295622 | orchestrator | 00:01:24.295 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[7]: Creation complete after 10s [id=d4d6d7dc-ffab-40f4-8a14-6defed4afc9f]
2025-05-29 00:01:24.312166 | orchestrator | 00:01:24.311 STDOUT terraform: local_file.id_rsa_pub: Creating...
2025-05-29 00:01:24.319938 | orchestrator | 00:01:24.319 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[8]: Creation complete after 10s [id=c045ec7e-dfd2-45aa-a5da-e7ebbe64f976]
2025-05-29 00:01:24.320555 | orchestrator | 00:01:24.320 STDOUT terraform: local_file.id_rsa_pub: Creation complete after 0s [id=b2704d64aa0aa133efa2c809eaedf49275a757d3]
2025-05-29 00:01:24.331201 | orchestrator | 00:01:24.330 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creating...
2025-05-29 00:01:24.332737 | orchestrator | 00:01:24.332 STDOUT terraform: local_sensitive_file.id_rsa: Creating...
2025-05-29 00:01:24.337831 | orchestrator | 00:01:24.337 STDOUT terraform: local_sensitive_file.id_rsa: Creation complete after 0s [id=f8c1b42fa5b7c9fb0e915a4a3f335ee5c9ac691c]
2025-05-29 00:01:24.341047 | orchestrator | 00:01:24.340 STDOUT terraform: openstack_blockstorage_volume_v3.node_volume[2]: Creation complete after 10s [id=6be5e360-5fe4-4176-98be-0e33dc067da2]
2025-05-29 00:01:29.622797 | orchestrator | 00:01:29.622 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Still creating... [10s elapsed]
2025-05-29 00:01:29.940854 | orchestrator | 00:01:29.940 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[0]: Creation complete after 10s [id=597f54b7-c847-4d10-a166-56462537237d]
2025-05-29 00:01:30.300493 | orchestrator | 00:01:30.299 STDOUT terraform: openstack_networking_subnet_v2.subnet_management: Creation complete after 6s [id=9b8ee89b-0c7c-478e-8704-fbfb3d6d93cf]
2025-05-29 00:01:30.311448 | orchestrator | 00:01:30.311 STDOUT terraform: openstack_networking_router_v2.router: Creating...
2025-05-29 00:01:34.210892 | orchestrator | 00:01:34.210 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Still creating... [10s elapsed]
2025-05-29 00:01:34.215984 | orchestrator | 00:01:34.215 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Still creating... [10s elapsed]
2025-05-29 00:01:34.233338 | orchestrator | 00:01:34.233 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Still creating... [10s elapsed]
2025-05-29 00:01:34.244648 | orchestrator | 00:01:34.244 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Still creating... [10s elapsed]
2025-05-29 00:01:34.249073 | orchestrator | 00:01:34.248 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Still creating... [10s elapsed]
2025-05-29 00:01:34.270463 | orchestrator | 00:01:34.270 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Still creating... [10s elapsed]
2025-05-29 00:01:34.550390 | orchestrator | 00:01:34.549 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[2]: Creation complete after 11s [id=43db7570-342a-4138-8a11-552755fecf02]
2025-05-29 00:01:34.591579 | orchestrator | 00:01:34.591 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[1]: Creation complete after 11s [id=dcea969f-ad3c-4191-bf1d-aa670bfd6fcb]
2025-05-29 00:01:34.625345 | orchestrator | 00:01:34.624 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[5]: Creation complete after 11s [id=13985d86-b513-49a7-ae6a-0b62fccaa428]
2025-05-29 00:01:34.645480 | orchestrator | 00:01:34.645 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[3]: Creation complete after 11s [id=3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155]
2025-05-29 00:01:34.648340 | orchestrator | 00:01:34.648 STDOUT terraform: openstack_blockstorage_volume_v3.node_base_volume[4]: Creation complete after 11s [id=c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61]
2025-05-29 00:01:34.653008 | orchestrator | 00:01:34.652 STDOUT terraform: openstack_blockstorage_volume_v3.manager_base_volume[0]: Creation complete after 11s [id=6231c96b-5122-4035-ac22-6bae5e335aa0]
2025-05-29 00:01:38.316286 | orchestrator | 00:01:38.315 STDOUT terraform: openstack_networking_router_v2.router: Creation complete after 8s [id=b5bd0319-9560-4e5e-9d8a-6ff49ccf6609]
2025-05-29 00:01:38.322788 | orchestrator | 00:01:38.322 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creating...
2025-05-29 00:01:38.323485 | orchestrator | 00:01:38.323 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creating...
2025-05-29 00:01:38.325132 | orchestrator | 00:01:38.324 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creating...
2025-05-29 00:01:38.511286 | orchestrator | 00:01:38.510 STDOUT terraform: openstack_networking_secgroup_v2.security_group_node: Creation complete after 1s [id=980cdc56-795f-48fe-87f1-750769410f21]
2025-05-29 00:01:38.524886 | orchestrator | 00:01:38.524 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creating...
2025-05-29 00:01:38.526328 | orchestrator | 00:01:38.526 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creating...
2025-05-29 00:01:38.531626 | orchestrator | 00:01:38.531 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creating...
2025-05-29 00:01:38.537961 | orchestrator | 00:01:38.537 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creating...
2025-05-29 00:01:38.538348 | orchestrator | 00:01:38.538 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creating...
2025-05-29 00:01:38.542545 | orchestrator | 00:01:38.542 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creating...
2025-05-29 00:01:38.542914 | orchestrator | 00:01:38.542 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creating...
2025-05-29 00:01:38.546193 | orchestrator | 00:01:38.545 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creating...
2025-05-29 00:01:38.714800 | orchestrator | 00:01:38.714 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule2: Creation complete after 0s [id=9ac21754-6d1f-4057-ad43-8c6f9b482e66]
2025-05-29 00:01:38.732679 | orchestrator | 00:01:38.732 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creating...
2025-05-29 00:01:38.859501 | orchestrator | 00:01:38.858 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_rule_vrrp: Creation complete after 0s [id=0fd54581-c74e-4af7-bdd9-61f1347dc05d]
2025-05-29 00:01:38.876580 | orchestrator | 00:01:38.876 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creating...
2025-05-29 00:01:38.999481 | orchestrator | 00:01:38.999 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule3: Creation complete after 0s [id=e2e4fba2-a0bb-4859-b466-2bfecb41d80e]
2025-05-29 00:01:39.019237 | orchestrator | 00:01:39.018 STDOUT terraform: openstack_networking_secgroup_v2.security_group_management: Creation complete after 1s [id=bfaf1bc2-9afa-4027-b5c4-8e448b920c5e]
2025-05-29 00:01:39.029148 | orchestrator | 00:01:39.028 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creating...
2025-05-29 00:01:39.036549 | orchestrator | 00:01:39.036 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creating...
2025-05-29 00:01:39.195248 | orchestrator | 00:01:39.194 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule3: Creation complete after 0s [id=2a10dedd-42b2-4f1b-bf28-4e690d015aab]
2025-05-29 00:01:39.203322 | orchestrator | 00:01:39.203 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creating...
2025-05-29 00:01:39.246302 | orchestrator | 00:01:39.245 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_node_rule1: Creation complete after 0s [id=4c317c32-01ca-43e7-9806-8f2d3abf260e]
2025-05-29 00:01:39.254982 | orchestrator | 00:01:39.254 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creating...
2025-05-29 00:01:39.404690 | orchestrator | 00:01:39.404 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule5: Creation complete after 0s [id=5fdc70de-5716-41f4-8415-f34a33d40d2f]
2025-05-29 00:01:39.411877 | orchestrator | 00:01:39.411 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creating...
2025-05-29 00:01:39.659279 | orchestrator | 00:01:39.658 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule4: Creation complete after 1s [id=ac31eb83-8c37-4e3f-89ad-689572f9f5f3]
2025-05-29 00:01:39.668139 | orchestrator | 00:01:39.667 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creating...
2025-05-29 00:01:39.810345 | orchestrator | 00:01:39.809 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule2: Creation complete after 1s [id=cfc449a1-75de-4cba-b4da-df0deb9be13d]
2025-05-29 00:01:39.959148 | orchestrator | 00:01:39.958 STDOUT terraform: openstack_networking_secgroup_rule_v2.security_group_management_rule1: Creation complete after 0s [id=e594134c-2e00-4592-a729-336635cd8c14]
2025-05-29 00:01:44.230825 | orchestrator | 00:01:44.230 STDOUT terraform: openstack_networking_port_v2.node_port_management[1]: Creation complete after 5s [id=a34fd6b8-ee44-418a-a909-1c8eb981eead]
2025-05-29 00:01:44.240320 | orchestrator | 00:01:44.239 STDOUT terraform: openstack_networking_port_v2.node_port_management[0]: Creation complete after 5s [id=bd238691-e693-4c6c-ab0a-d027ee2a2414]
2025-05-29 00:01:44.295455 | orchestrator | 00:01:44.295 STDOUT terraform: openstack_networking_port_v2.node_port_management[4]: Creation complete after 5s [id=308345be-e39a-4a67-8073-9b8587dc4ae5]
2025-05-29 00:01:44.514690 | orchestrator | 00:01:44.514 STDOUT terraform: openstack_networking_port_v2.node_port_management[5]: Creation complete after 6s [id=bf61b2bb-739e-4d4f-bf3a-6927f628ed28]
2025-05-29 00:01:44.559725 | orchestrator | 00:01:44.559 STDOUT terraform: openstack_networking_port_v2.node_port_management[2]: Creation complete after 6s [id=298c7446-92a4-42d9-bdba-128f8b4f82d3]
2025-05-29 00:01:44.726418 | orchestrator | 00:01:44.726 STDOUT terraform: openstack_networking_port_v2.manager_port_management: Creation complete after 6s [id=0af7086d-b5bd-4bcb-acf3-9e73baa8a109]
2025-05-29 00:01:44.788184 | orchestrator | 00:01:44.787 STDOUT terraform: openstack_networking_port_v2.node_port_management[3]: Creation complete after 6s [id=a35c2a62-72b2-4e72-b4d8-59b7d7f8b592]
2025-05-29 00:01:46.137575 | orchestrator | 00:01:46.137 STDOUT terraform: openstack_networking_router_interface_v2.router_interface: Creation complete after 8s [id=f07e2af4-834c-4058-ba38-326498d9e483]
2025-05-29 00:01:46.166803 | orchestrator | 00:01:46.166 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creating...
2025-05-29 00:01:46.166880 | orchestrator | 00:01:46.166 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creating...
2025-05-29 00:01:46.172268 | orchestrator | 00:01:46.171 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creating...
2025-05-29 00:01:46.183480 | orchestrator | 00:01:46.183 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creating...
2025-05-29 00:01:46.184648 | orchestrator | 00:01:46.184 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creating...
2025-05-29 00:01:46.198564 | orchestrator | 00:01:46.198 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creating...
2025-05-29 00:01:46.198631 | orchestrator | 00:01:46.198 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creating...
2025-05-29 00:01:52.913012 | orchestrator | 00:01:52.912 STDOUT terraform: openstack_networking_floatingip_v2.manager_floating_ip: Creation complete after 7s [id=8743c76f-1c8a-4b82-874a-5695e164ad9f]
2025-05-29 00:01:52.929902 | orchestrator | 00:01:52.929 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creating...
2025-05-29 00:01:52.930009 | orchestrator | 00:01:52.929 STDOUT terraform: local_file.MANAGER_ADDRESS: Creating...
2025-05-29 00:01:52.931467 | orchestrator | 00:01:52.931 STDOUT terraform: local_file.inventory: Creating...
2025-05-29 00:01:52.937644 | orchestrator | 00:01:52.937 STDOUT terraform: local_file.MANAGER_ADDRESS: Creation complete after 0s [id=df1a796b9be2a3fe3875cb7c3cab99360bec0582]
2025-05-29 00:01:52.938190 | orchestrator | 00:01:52.938 STDOUT terraform: local_file.inventory: Creation complete after 0s [id=05da58db7ff3adc5273efb3dd158e1f6561b55fe]
2025-05-29 00:01:54.123997 | orchestrator | 00:01:54.123 STDOUT terraform: openstack_networking_floatingip_associate_v2.manager_floating_ip_association: Creation complete after 1s [id=8743c76f-1c8a-4b82-874a-5695e164ad9f]
2025-05-29 00:01:56.170366 | orchestrator | 00:01:56.169 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [10s elapsed]
2025-05-29 00:01:56.172288 | orchestrator | 00:01:56.171 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [10s elapsed]
2025-05-29 00:01:56.183998 | orchestrator | 00:01:56.183 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [10s elapsed]
2025-05-29 00:01:56.186102 | orchestrator | 00:01:56.185 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [10s elapsed]
2025-05-29 00:01:56.199665 | orchestrator | 00:01:56.199 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [10s elapsed]
2025-05-29 00:01:56.199867 | orchestrator | 00:01:56.199 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [10s elapsed]
2025-05-29 00:02:06.171811 | orchestrator | 00:02:06.171 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [20s elapsed]
2025-05-29 00:02:06.172810 | orchestrator | 00:02:06.172 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Still creating... [20s elapsed]
2025-05-29 00:02:06.185112 | orchestrator | 00:02:06.184 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Still creating... [20s elapsed]
2025-05-29 00:02:06.186251 | orchestrator | 00:02:06.185 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Still creating... [20s elapsed]
2025-05-29 00:02:06.200697 | orchestrator | 00:02:06.200 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [20s elapsed]
2025-05-29 00:02:06.200792 | orchestrator | 00:02:06.200 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Still creating... [20s elapsed]
2025-05-29 00:02:06.570371 | orchestrator | 00:02:06.570 STDOUT terraform: openstack_compute_instance_v2.node_server[5]: Creation complete after 21s [id=25a190ce-d4ad-42d9-be85-8d656660f569]
2025-05-29 00:02:06.855707 | orchestrator | 00:02:06.855 STDOUT terraform: openstack_compute_instance_v2.node_server[1]: Creation complete after 21s [id=e7dd6783-cafe-47ce-ad4c-c90ceb35de52]
2025-05-29 00:02:07.101392 | orchestrator | 00:02:07.101 STDOUT terraform: openstack_compute_instance_v2.node_server[4]: Creation complete after 21s [id=e1ac5c8c-46af-4ea1-983a-6748bb718f68]
2025-05-29 00:02:07.194853 | orchestrator | 00:02:07.194 STDOUT terraform: openstack_compute_instance_v2.node_server[0]: Creation complete after 21s [id=fc5f843a-51f8-46b5-8ca8-726c060495b3]
2025-05-29 00:02:16.174585 | orchestrator | 00:02:16.174 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Still creating... [30s elapsed]
2025-05-29 00:02:16.201527 | orchestrator | 00:02:16.201 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Still creating... [30s elapsed]
2025-05-29 00:02:16.816422 | orchestrator | 00:02:16.815 STDOUT terraform: openstack_compute_instance_v2.node_server[3]: Creation complete after 31s [id=fd1e0854-bbbd-4213-b346-2a1eb9d978d1]
2025-05-29 00:02:16.986892 | orchestrator | 00:02:16.986 STDOUT terraform: openstack_compute_instance_v2.node_server[2]: Creation complete after 31s [id=5cb188a6-abfe-4992-b0d1-f27f8ce77793]
2025-05-29 00:02:17.018490 | orchestrator | 00:02:17.018 STDOUT terraform: null_resource.node_semaphore: Creating...
2025-05-29 00:02:17.021945 | orchestrator | 00:02:17.021 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creating...
2025-05-29 00:02:17.023916 | orchestrator | 00:02:17.023 STDOUT terraform: null_resource.node_semaphore: Creation complete after 0s [id=1185443651435548571]
2025-05-29 00:02:17.031143 | orchestrator | 00:02:17.030 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creating...
2025-05-29 00:02:17.031211 | orchestrator | 00:02:17.031 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creating...
2025-05-29 00:02:17.031488 | orchestrator | 00:02:17.031 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creating...
2025-05-29 00:02:17.031884 | orchestrator | 00:02:17.031 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creating...
2025-05-29 00:02:17.035444 | orchestrator | 00:02:17.034 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creating...
2025-05-29 00:02:17.038919 | orchestrator | 00:02:17.038 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creating...
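The attachment completions logged below report composite IDs of the form `<server_id>/<volume_id>`, and the pairings show volumes 0-8 distributed three per instance across node_server[3..5]. A hypothetical HCL sketch of that wiring, inferred from the logged IDs rather than taken from the testbed source:

```hcl
# Hypothetical sketch; the count-based mapping is inferred from the logged
# server/volume ID pairs (volume i attaches to node_server[i % 3 + 3]).
resource "openstack_compute_volume_attach_v2" "node_volume_attachment" {
  count       = 9
  instance_id = openstack_compute_instance_v2.node_server[count.index % 3 + 3].id
  volume_id   = openstack_blockstorage_volume_v3.node_volume[count.index].id
}
```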
2025-05-29 00:02:17.040589 | orchestrator | 00:02:17.040 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creating... 2025-05-29 00:02:17.054461 | orchestrator | 00:02:17.054 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creating... 2025-05-29 00:02:17.064954 | orchestrator | 00:02:17.064 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creating... 2025-05-29 00:02:22.337510 | orchestrator | 00:02:22.337 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[4]: Creation complete after 5s [id=e1ac5c8c-46af-4ea1-983a-6748bb718f68/f7dbb189-5858-4eca-9499-fceb9ae8f8d2] 2025-05-29 00:02:22.364220 | orchestrator | 00:02:22.363 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[8]: Creation complete after 5s [id=25a190ce-d4ad-42d9-be85-8d656660f569/c045ec7e-dfd2-45aa-a5da-e7ebbe64f976] 2025-05-29 00:02:22.368462 | orchestrator | 00:02:22.368 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[3]: Creation complete after 5s [id=fd1e0854-bbbd-4213-b346-2a1eb9d978d1/872f8c6a-38b8-4598-af69-d174e2488207] 2025-05-29 00:02:22.382618 | orchestrator | 00:02:22.382 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[1]: Creation complete after 5s [id=e1ac5c8c-46af-4ea1-983a-6748bb718f68/ab52b3eb-0fd7-41fe-9d4d-bdc516081274] 2025-05-29 00:02:22.404206 | orchestrator | 00:02:22.403 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[2]: Creation complete after 5s [id=25a190ce-d4ad-42d9-be85-8d656660f569/6be5e360-5fe4-4176-98be-0e33dc067da2] 2025-05-29 00:02:22.406765 | orchestrator | 00:02:22.406 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[6]: Creation complete after 5s [id=fd1e0854-bbbd-4213-b346-2a1eb9d978d1/81c2fe1f-38cc-49f7-ae7d-3d898626253d] 2025-05-29 00:02:22.432379 | orchestrator | 00:02:22.432 STDOUT terraform: 
openstack_compute_volume_attach_v2.node_volume_attachment[0]: Creation complete after 5s [id=fd1e0854-bbbd-4213-b346-2a1eb9d978d1/172ad3b6-4b22-4cdf-a28e-ac5da2182fda] 2025-05-29 00:02:22.548041 | orchestrator | 00:02:22.547 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[7]: Creation complete after 6s [id=e1ac5c8c-46af-4ea1-983a-6748bb718f68/d4d6d7dc-ffab-40f4-8a14-6defed4afc9f] 2025-05-29 00:02:24.930216 | orchestrator | 00:02:24.929 STDOUT terraform: openstack_compute_volume_attach_v2.node_volume_attachment[5]: Creation complete after 8s [id=25a190ce-d4ad-42d9-be85-8d656660f569/baffed07-1ba6-4c69-bef3-fae49f76e29e] 2025-05-29 00:02:27.066408 | orchestrator | 00:02:27.066 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [10s elapsed] 2025-05-29 00:02:37.067500 | orchestrator | 00:02:37.067 STDOUT terraform: openstack_compute_instance_v2.manager_server: Still creating... [20s elapsed] 2025-05-29 00:02:37.562303 | orchestrator | 00:02:37.561 STDOUT terraform: openstack_compute_instance_v2.manager_server: Creation complete after 21s [id=70440afa-9b3d-47da-b113-45631addb2db] 2025-05-29 00:02:37.586440 | orchestrator | 00:02:37.586 STDOUT terraform: Apply complete! Resources: 64 added, 0 changed, 0 destroyed. 
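The apply above finishes with 64 resources, after which the job reads the `manager_address` Terraform output (the "Fetch manager address" task that follows). Outside of Ansible, `terraform output -raw manager_address` reads it directly; the sed helper below is purely illustrative, for the case where only `-json` output is at hand, and assumes the JSON has been flattened to a single line:

```shell
# Illustrative helper: pull one value out of single-line `terraform output -json`.
# `terraform output -raw manager_address` is the direct route and needs no parsing.
extract_output() {  # $1 = output name, JSON on stdin
  sed -n "s/.*\"$1\"[^}]*\"value\": *\"\([^\"]*\)\".*/\1/p"
}
```

Usage: `terraform output -json | tr -d '\n' | extract_output manager_address`.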
2025-05-29 00:02:37.586516 | orchestrator | 00:02:37.586 STDOUT terraform: Outputs: 2025-05-29 00:02:37.586527 | orchestrator | 00:02:37.586 STDOUT terraform: manager_address = 2025-05-29 00:02:37.586544 | orchestrator | 00:02:37.586 STDOUT terraform: private_key = 2025-05-29 00:02:37.758401 | orchestrator | ok: Runtime: 0:01:35.405265 2025-05-29 00:02:37.798092 | 2025-05-29 00:02:37.798276 | TASK [Fetch manager address] 2025-05-29 00:02:38.244627 | orchestrator | ok 2025-05-29 00:02:38.255209 | 2025-05-29 00:02:38.255360 | TASK [Set manager_host address] 2025-05-29 00:02:38.341347 | orchestrator | ok 2025-05-29 00:02:38.355532 | 2025-05-29 00:02:38.355679 | LOOP [Update ansible collections] 2025-05-29 00:02:39.237591 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-29 00:02:39.238010 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-29 00:02:39.238072 | orchestrator | Starting galaxy collection install process 2025-05-29 00:02:39.238111 | orchestrator | Process install dependency map 2025-05-29 00:02:39.238146 | orchestrator | Starting collection install process 2025-05-29 00:02:39.238178 | orchestrator | Installing 'osism.commons:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons' 2025-05-29 00:02:39.238213 | orchestrator | Created collection for osism.commons:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons 2025-05-29 00:02:39.238252 | orchestrator | osism.commons:999.0.0 was installed successfully 2025-05-29 00:02:39.238331 | orchestrator | ok: Item: commons Runtime: 0:00:00.526661 2025-05-29 00:02:40.061574 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-29 00:02:40.062591 | orchestrator | [WARNING]: Collection osism.services does not support Ansible version 2.15.2 2025-05-29 00:02:40.062675 | orchestrator | Starting galaxy 
collection install process 2025-05-29 00:02:40.062707 | orchestrator | Process install dependency map 2025-05-29 00:02:40.062734 | orchestrator | Starting collection install process 2025-05-29 00:02:40.062759 | orchestrator | Installing 'osism.services:999.0.0' to '/home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services' 2025-05-29 00:02:40.062786 | orchestrator | Created collection for osism.services:999.0.0 at /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/services 2025-05-29 00:02:40.062809 | orchestrator | osism.services:999.0.0 was installed successfully 2025-05-29 00:02:40.062875 | orchestrator | ok: Item: services Runtime: 0:00:00.549245 2025-05-29 00:02:40.090946 | 2025-05-29 00:02:40.091250 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-29 00:02:50.664105 | orchestrator | ok 2025-05-29 00:02:50.674964 | 2025-05-29 00:02:50.675116 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-29 00:03:50.727963 | orchestrator | ok 2025-05-29 00:03:50.738392 | 2025-05-29 00:03:50.738713 | TASK [Fetch manager ssh hostkey] 2025-05-29 00:03:52.331284 | orchestrator | Output suppressed because no_log was given 2025-05-29 00:03:52.348604 | 2025-05-29 00:03:52.348770 | TASK [Get ssh keypair from terraform environment] 2025-05-29 00:03:52.892424 | orchestrator | ok: Runtime: 0:00:00.008086 2025-05-29 00:03:52.909641 | 2025-05-29 00:03:52.909850 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-29 00:03:52.953812 | orchestrator | ok: The task 'Run manager part 0' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete. 
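The "Wait up to 300 seconds for port 22 to become open and contain \"OpenSSH\"" task above polls until the SSH version banner matches. A rough shell equivalent — host and timeout are placeholders, not values from this job, and the banner check is factored out so it can be exercised on its own:

```shell
# banner_ok: succeeds when an SSH version banner mentions OpenSSH,
# mirroring the search string of the wait task above.
banner_ok() { case "$1" in *OpenSSH*) return 0 ;; *) return 1 ;; esac; }

# wait_for_ssh HOST [TIMEOUT]: poll port 22 until the banner matches (bash).
wait_for_ssh() {
  local host="$1" timeout="${2:-300}" banner deadline
  deadline=$(( $(date +%s) + timeout ))
  while [ "$(date +%s)" -lt "$deadline" ]; do
    # Read the banner without authenticating, via bash's /dev/tcp
    banner=$(timeout 5 bash -c "exec 3<>/dev/tcp/$host/22; head -c 64 <&3" 2>/dev/null || true)
    banner_ok "$banner" && return 0
    sleep 5
  done
  return 1
}
```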
2025-05-29 00:03:52.961789 | 2025-05-29 00:03:52.961963 | TASK [Run manager part 0] 2025-05-29 00:03:53.811137 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-29 00:03:53.854355 | orchestrator | 2025-05-29 00:03:53.854406 | orchestrator | PLAY [Wait for cloud-init to finish] ******************************************* 2025-05-29 00:03:53.854414 | orchestrator | 2025-05-29 00:03:53.854426 | orchestrator | TASK [Check /var/lib/cloud/instance/boot-finished] ***************************** 2025-05-29 00:03:55.565537 | orchestrator | ok: [testbed-manager] 2025-05-29 00:03:55.565596 | orchestrator | 2025-05-29 00:03:55.565620 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-29 00:03:55.565631 | orchestrator | 2025-05-29 00:03:55.565642 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-29 00:03:57.953006 | orchestrator | ok: [testbed-manager] 2025-05-29 00:03:57.953066 | orchestrator | 2025-05-29 00:03:57.953073 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-29 00:03:58.547449 | orchestrator | ok: [testbed-manager] 2025-05-29 00:03:58.547488 | orchestrator | 2025-05-29 00:03:58.547495 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-29 00:03:58.597575 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:03:58.597617 | orchestrator | 2025-05-29 00:03:58.597626 | orchestrator | TASK [Update package cache] **************************************************** 2025-05-29 00:03:58.627383 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:03:58.627404 | orchestrator | 2025-05-29 00:03:58.627410 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-29 00:03:58.657671 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:03:58.657702 | 
orchestrator | 2025-05-29 00:03:58.657708 | orchestrator | TASK [Remove some python packages] ********************************************* 2025-05-29 00:03:58.687057 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:03:58.687083 | orchestrator | 2025-05-29 00:03:58.687088 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-29 00:03:58.712408 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:03:58.712427 | orchestrator | 2025-05-29 00:03:58.712432 | orchestrator | TASK [Fail if Ubuntu version is lower than 22.04] ****************************** 2025-05-29 00:03:58.734572 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:03:58.734593 | orchestrator | 2025-05-29 00:03:58.734599 | orchestrator | TASK [Fail if Debian version is lower than 12] ********************************* 2025-05-29 00:03:58.760657 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:03:58.760675 | orchestrator | 2025-05-29 00:03:58.760681 | orchestrator | TASK [Set APT options on manager] ********************************************** 2025-05-29 00:03:59.562880 | orchestrator | changed: [testbed-manager] 2025-05-29 00:03:59.562935 | orchestrator | 2025-05-29 00:03:59.562944 | orchestrator | TASK [Update APT cache and run dist-upgrade] *********************************** 2025-05-29 00:07:47.446708 | orchestrator | changed: [testbed-manager] 2025-05-29 00:07:47.446812 | orchestrator | 2025-05-29 00:07:47.446832 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-29 00:09:18.205686 | orchestrator | changed: [testbed-manager] 2025-05-29 00:09:18.205852 | orchestrator | 2025-05-29 00:09:18.205873 | orchestrator | TASK [Install required packages] *********************************************** 2025-05-29 00:09:58.317899 | orchestrator | changed: [testbed-manager] 2025-05-29 00:09:58.318058 | orchestrator | 2025-05-29 00:09:58.318081 | orchestrator | TASK [Remove 
some python packages] ********************************************* 2025-05-29 00:10:07.217904 | orchestrator | changed: [testbed-manager] 2025-05-29 00:10:07.217998 | orchestrator | 2025-05-29 00:10:07.218042 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-29 00:10:07.266236 | orchestrator | ok: [testbed-manager] 2025-05-29 00:10:07.266312 | orchestrator | 2025-05-29 00:10:07.266327 | orchestrator | TASK [Get current user] ******************************************************** 2025-05-29 00:10:08.039961 | orchestrator | ok: [testbed-manager] 2025-05-29 00:10:08.040055 | orchestrator | 2025-05-29 00:10:08.040075 | orchestrator | TASK [Create venv directory] *************************************************** 2025-05-29 00:10:08.780525 | orchestrator | changed: [testbed-manager] 2025-05-29 00:10:08.780618 | orchestrator | 2025-05-29 00:10:08.780636 | orchestrator | TASK [Install netaddr in venv] ************************************************* 2025-05-29 00:10:15.207909 | orchestrator | changed: [testbed-manager] 2025-05-29 00:10:15.207992 | orchestrator | 2025-05-29 00:10:15.208022 | orchestrator | TASK [Install ansible-core in venv] ******************************************** 2025-05-29 00:10:21.242706 | orchestrator | changed: [testbed-manager] 2025-05-29 00:10:21.242826 | orchestrator | 2025-05-29 00:10:21.242844 | orchestrator | TASK [Install requests >= 2.32.2] ********************************************** 2025-05-29 00:10:23.819154 | orchestrator | changed: [testbed-manager] 2025-05-29 00:10:23.819237 | orchestrator | 2025-05-29 00:10:23.819253 | orchestrator | TASK [Install docker >= 7.1.0] ************************************************* 2025-05-29 00:10:25.614944 | orchestrator | changed: [testbed-manager] 2025-05-29 00:10:25.615011 | orchestrator | 2025-05-29 00:10:25.615020 | orchestrator | TASK [Create directories in /opt/src] ****************************************** 2025-05-29 
00:10:26.893109 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-29 00:10:26.893151 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-29 00:10:26.893160 | orchestrator | 2025-05-29 00:10:26.893167 | orchestrator | TASK [Sync sources in /opt/src] ************************************************ 2025-05-29 00:10:26.938998 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-29 00:10:26.939070 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-29 00:10:26.939084 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-29 00:10:26.939097 | orchestrator | deprecation_warnings=False in ansible.cfg. 2025-05-29 00:10:30.141124 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-commons) 2025-05-29 00:10:30.141181 | orchestrator | changed: [testbed-manager] => (item=osism/ansible-collection-services) 2025-05-29 00:10:30.141192 | orchestrator | 2025-05-29 00:10:30.141202 | orchestrator | TASK [Create /usr/share/ansible directory] ************************************* 2025-05-29 00:10:30.763066 | orchestrator | changed: [testbed-manager] 2025-05-29 00:10:30.763164 | orchestrator | 2025-05-29 00:10:30.763183 | orchestrator | TASK [Install collections from Ansible galaxy] ********************************* 2025-05-29 00:10:51.231355 | orchestrator | changed: [testbed-manager] => (item=ansible.netcommon) 2025-05-29 00:10:51.231447 | orchestrator | changed: [testbed-manager] => (item=ansible.posix) 2025-05-29 00:10:51.231464 | orchestrator | changed: [testbed-manager] => (item=community.docker>=3.10.2) 2025-05-29 00:10:51.231477 | orchestrator | 2025-05-29 00:10:51.231489 | orchestrator | TASK [Install local collections] *********************************************** 2025-05-29 00:10:53.511960 | orchestrator | changed: [testbed-manager] => 
(item=ansible-collection-commons) 2025-05-29 00:10:53.512036 | orchestrator | changed: [testbed-manager] => (item=ansible-collection-services) 2025-05-29 00:10:53.512052 | orchestrator | 2025-05-29 00:10:53.512064 | orchestrator | PLAY [Create operator user] **************************************************** 2025-05-29 00:10:53.512076 | orchestrator | 2025-05-29 00:10:53.512087 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-29 00:10:54.905829 | orchestrator | ok: [testbed-manager] 2025-05-29 00:10:54.905907 | orchestrator | 2025-05-29 00:10:54.905925 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-29 00:10:54.949881 | orchestrator | ok: [testbed-manager] 2025-05-29 00:10:54.949947 | orchestrator | 2025-05-29 00:10:54.949963 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-29 00:10:55.015083 | orchestrator | ok: [testbed-manager] 2025-05-29 00:10:55.015120 | orchestrator | 2025-05-29 00:10:55.015129 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-29 00:10:55.729527 | orchestrator | changed: [testbed-manager] 2025-05-29 00:10:55.729603 | orchestrator | 2025-05-29 00:10:55.729619 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-29 00:10:56.446994 | orchestrator | changed: [testbed-manager] 2025-05-29 00:10:56.447779 | orchestrator | 2025-05-29 00:10:56.447804 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-29 00:10:57.863434 | orchestrator | changed: [testbed-manager] => (item=adm) 2025-05-29 00:10:57.863480 | orchestrator | changed: [testbed-manager] => (item=sudo) 2025-05-29 00:10:57.863488 | orchestrator | 2025-05-29 00:10:57.863502 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] 
************************* 2025-05-29 00:10:59.179508 | orchestrator | changed: [testbed-manager] 2025-05-29 00:10:59.179594 | orchestrator | 2025-05-29 00:10:59.179630 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] *** 2025-05-29 00:11:00.906633 | orchestrator | changed: [testbed-manager] => (item=export LANGUAGE=C.UTF-8) 2025-05-29 00:11:00.906736 | orchestrator | changed: [testbed-manager] => (item=export LANG=C.UTF-8) 2025-05-29 00:11:00.906747 | orchestrator | changed: [testbed-manager] => (item=export LC_ALL=C.UTF-8) 2025-05-29 00:11:00.906755 | orchestrator | 2025-05-29 00:11:00.906765 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] ************************** 2025-05-29 00:11:01.474480 | orchestrator | changed: [testbed-manager] 2025-05-29 00:11:01.475355 | orchestrator | 2025-05-29 00:11:01.475391 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************ 2025-05-29 00:11:01.540590 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:11:01.540667 | orchestrator | 2025-05-29 00:11:01.540683 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************ 2025-05-29 00:11:02.398858 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-29 00:11:02.398955 | orchestrator | changed: [testbed-manager] 2025-05-29 00:11:02.398973 | orchestrator | 2025-05-29 00:11:02.398986 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] ********************* 2025-05-29 00:11:02.434427 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:11:02.434512 | orchestrator | 2025-05-29 00:11:02.434529 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] ***************** 2025-05-29 00:11:02.470929 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:11:02.470983 | orchestrator | 2025-05-29 00:11:02.470996 | orchestrator | TASK [osism.commons.operator : Delete 
authorized GitHub accounts] ************** 2025-05-29 00:11:02.507919 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:11:02.508003 | orchestrator | 2025-05-29 00:11:02.508018 | orchestrator | TASK [osism.commons.operator : Set password] *********************************** 2025-05-29 00:11:02.569254 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:11:02.569334 | orchestrator | 2025-05-29 00:11:02.569349 | orchestrator | TASK [osism.commons.operator : Unset & lock password] ************************** 2025-05-29 00:11:03.280114 | orchestrator | ok: [testbed-manager] 2025-05-29 00:11:03.280199 | orchestrator | 2025-05-29 00:11:03.280216 | orchestrator | PLAY [Run manager part 0] ****************************************************** 2025-05-29 00:11:03.280228 | orchestrator | 2025-05-29 00:11:03.280242 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-29 00:11:04.674535 | orchestrator | ok: [testbed-manager] 2025-05-29 00:11:04.674606 | orchestrator | 2025-05-29 00:11:04.674622 | orchestrator | TASK [Recursively change ownership of /opt/venv] ******************************* 2025-05-29 00:11:05.627491 | orchestrator | changed: [testbed-manager] 2025-05-29 00:11:05.627529 | orchestrator | 2025-05-29 00:11:05.627535 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:11:05.627540 | orchestrator | testbed-manager : ok=33 changed=23 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0 2025-05-29 00:11:05.627545 | orchestrator | 2025-05-29 00:11:05.794919 | orchestrator | ok: Runtime: 0:07:12.462768 2025-05-29 00:11:05.804347 | 2025-05-29 00:11:05.804461 | TASK [Point out that the log in on the manager is now possible] 2025-05-29 00:11:05.849075 | orchestrator | ok: It is now already possible to log in to the manager with 'make login'. 
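The PLAY RECAP above (ok=33 changed=23 … failed=0) is the usual place to assert on a run's health. A small illustrative extractor for those counters — not part of the job itself, just a sketch of how a wrapper script might read them:

```shell
# Illustrative: read one counter out of an Ansible PLAY RECAP line.
# Pass an exact counter name; a substring (e.g. "reachable") would mismatch.
recap_count() {  # $1 = counter name (ok, changed, failed, ...), recap line on stdin
  sed -n "s/.*$1=\([0-9]*\).*/\1/p"
}
```

Usage: `recap_count failed < recap.txt` yields `0` for a clean run.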
2025-05-29 00:11:05.858359 | 2025-05-29 00:11:05.858488 | TASK [Point out that the following task takes some time and does not give any output] 2025-05-29 00:11:05.892101 | orchestrator | ok: The task 'Run manager part 1 + 2' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minuts for this task to complete. 2025-05-29 00:11:05.899317 | 2025-05-29 00:11:05.899436 | TASK [Run manager part 1 + 2] 2025-05-29 00:11:06.788793 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.2 2025-05-29 00:11:06.841005 | orchestrator | 2025-05-29 00:11:06.841051 | orchestrator | PLAY [Run manager part 1] ****************************************************** 2025-05-29 00:11:06.841058 | orchestrator | 2025-05-29 00:11:06.841070 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-29 00:11:09.330421 | orchestrator | ok: [testbed-manager] 2025-05-29 00:11:09.330480 | orchestrator | 2025-05-29 00:11:09.330513 | orchestrator | TASK [Set venv_command fact (RedHat)] ****************************************** 2025-05-29 00:11:09.367981 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:11:09.368037 | orchestrator | 2025-05-29 00:11:09.368048 | orchestrator | TASK [Set venv_command fact (Debian)] ****************************************** 2025-05-29 00:11:09.416037 | orchestrator | ok: [testbed-manager] 2025-05-29 00:11:09.416091 | orchestrator | 2025-05-29 00:11:09.416102 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-29 00:11:09.459359 | orchestrator | ok: [testbed-manager] 2025-05-29 00:11:09.459410 | orchestrator | 2025-05-29 00:11:09.459421 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-29 00:11:09.527860 | orchestrator | ok: [testbed-manager] 2025-05-29 00:11:09.527917 | orchestrator | 2025-05-29 00:11:09.527928 | 
[Note: the quoted task message above reads "a few minuts" in the upstream playbook; "a few minutes" is meant.]
orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-29 00:11:09.588145 | orchestrator | ok: [testbed-manager] 2025-05-29 00:11:09.588197 | orchestrator | 2025-05-29 00:11:09.588208 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-29 00:11:09.635673 | orchestrator | included: /home/zuul-testbed01/.ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager 2025-05-29 00:11:09.635770 | orchestrator | 2025-05-29 00:11:09.635778 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-29 00:11:10.385862 | orchestrator | ok: [testbed-manager] 2025-05-29 00:11:10.385919 | orchestrator | 2025-05-29 00:11:10.385929 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-29 00:11:10.435833 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:11:10.435885 | orchestrator | 2025-05-29 00:11:10.435894 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-29 00:11:11.865403 | orchestrator | changed: [testbed-manager] 2025-05-29 00:11:11.865476 | orchestrator | 2025-05-29 00:11:11.865490 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-29 00:11:12.457961 | orchestrator | ok: [testbed-manager] 2025-05-29 00:11:12.458012 | orchestrator | 2025-05-29 00:11:12.458052 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-29 00:11:13.681849 | orchestrator | changed: [testbed-manager] 2025-05-29 00:11:13.681904 | orchestrator | 2025-05-29 00:11:13.681919 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-29 00:11:26.537867 | orchestrator | changed: [testbed-manager] 2025-05-29 00:11:26.538060 | orchestrator | 
2025-05-29 00:11:26.538082 | orchestrator | TASK [Get home directory of ansible user] ************************************** 2025-05-29 00:11:27.214809 | orchestrator | ok: [testbed-manager] 2025-05-29 00:11:27.214902 | orchestrator | 2025-05-29 00:11:27.214921 | orchestrator | TASK [Set repo_path fact] ****************************************************** 2025-05-29 00:11:27.267798 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:11:27.267873 | orchestrator | 2025-05-29 00:11:27.267887 | orchestrator | TASK [Copy SSH public key] ***************************************************** 2025-05-29 00:11:28.227504 | orchestrator | changed: [testbed-manager] 2025-05-29 00:11:28.227594 | orchestrator | 2025-05-29 00:11:28.227610 | orchestrator | TASK [Copy SSH private key] **************************************************** 2025-05-29 00:11:29.213932 | orchestrator | changed: [testbed-manager] 2025-05-29 00:11:29.214056 | orchestrator | 2025-05-29 00:11:29.214077 | orchestrator | TASK [Create configuration directory] ****************************************** 2025-05-29 00:11:29.794324 | orchestrator | changed: [testbed-manager] 2025-05-29 00:11:29.794407 | orchestrator | 2025-05-29 00:11:29.794423 | orchestrator | TASK [Copy testbed repo] ******************************************************* 2025-05-29 00:11:29.834468 | orchestrator | [DEPRECATION WARNING]: The connection's stdin object is deprecated. Call 2025-05-29 00:11:29.834555 | orchestrator | display.prompt_until(msg) instead. This feature will be removed in version 2025-05-29 00:11:29.834569 | orchestrator | 2.19. Deprecation warnings can be disabled by setting 2025-05-29 00:11:29.834581 | orchestrator | deprecation_warnings=False in ansible.cfg. 
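The rsync-based "Copy testbed repo" task triggers the stdin deprecation warning shown above, and the warning itself names the switch that silences it. A sketch of applying that setting — the file path is an assumption; any `ansible.cfg` the controller actually reads will do:

```shell
# Append the knob the deprecation warning points at
# (deprecation_warnings=False in ansible.cfg, per the warning text above).
cfg=ansible.cfg
printf '[defaults]\ndeprecation_warnings = False\n' >> "$cfg"
```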
2025-05-29 00:11:31.946075 | orchestrator | changed: [testbed-manager] 2025-05-29 00:11:31.946153 | orchestrator | 2025-05-29 00:11:31.946170 | orchestrator | TASK [Install python requirements in venv] ************************************* 2025-05-29 00:11:40.789965 | orchestrator | ok: [testbed-manager] => (item=Jinja2) 2025-05-29 00:11:40.790103 | orchestrator | ok: [testbed-manager] => (item=PyYAML) 2025-05-29 00:11:40.790125 | orchestrator | ok: [testbed-manager] => (item=packaging) 2025-05-29 00:11:40.790138 | orchestrator | changed: [testbed-manager] => (item=python-gilt==1.2.3) 2025-05-29 00:11:40.790160 | orchestrator | ok: [testbed-manager] => (item=requests>=2.32.2) 2025-05-29 00:11:40.790171 | orchestrator | ok: [testbed-manager] => (item=docker>=7.1.0) 2025-05-29 00:11:40.790183 | orchestrator | 2025-05-29 00:11:40.790196 | orchestrator | TASK [Copy testbed custom CA certificate on Debian/Ubuntu] ********************* 2025-05-29 00:11:41.850806 | orchestrator | changed: [testbed-manager] 2025-05-29 00:11:41.850893 | orchestrator | 2025-05-29 00:11:41.850911 | orchestrator | TASK [Copy testbed custom CA certificate on CentOS] **************************** 2025-05-29 00:11:41.895316 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:11:41.895394 | orchestrator | 2025-05-29 00:11:41.895409 | orchestrator | TASK [Run update-ca-certificates on Debian/Ubuntu] ***************************** 2025-05-29 00:11:45.087594 | orchestrator | changed: [testbed-manager] 2025-05-29 00:11:45.087637 | orchestrator | 2025-05-29 00:11:45.087646 | orchestrator | TASK [Run update-ca-trust on RedHat] ******************************************* 2025-05-29 00:11:45.130184 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:11:45.130280 | orchestrator | 2025-05-29 00:11:45.130295 | orchestrator | TASK [Run manager part 2] ****************************************************** 2025-05-29 00:13:24.116653 | orchestrator | changed: [testbed-manager] 2025-05-29 
00:13:24.116695 | orchestrator | 2025-05-29 00:13:24.116753 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] ***** 2025-05-29 00:13:25.230949 | orchestrator | ok: [testbed-manager] 2025-05-29 00:13:25.231040 | orchestrator | 2025-05-29 00:13:25.231058 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:13:25.231073 | orchestrator | testbed-manager : ok=21 changed=11 unreachable=0 failed=0 skipped=5 rescued=0 ignored=0 2025-05-29 00:13:25.231084 | orchestrator | 2025-05-29 00:13:25.525067 | orchestrator | ok: Runtime: 0:02:19.071164 2025-05-29 00:13:25.541526 | 2025-05-29 00:13:25.541660 | TASK [Reboot manager] 2025-05-29 00:13:27.078948 | orchestrator | ok: Runtime: 0:00:00.937799 2025-05-29 00:13:27.094501 | 2025-05-29 00:13:27.094661 | TASK [Wait up to 300 seconds for port 22 to become open and contain "OpenSSH"] 2025-05-29 00:13:41.557419 | orchestrator | ok 2025-05-29 00:13:41.567855 | 2025-05-29 00:13:41.567987 | TASK [Wait a little longer for the manager so that everything is ready] 2025-05-29 00:14:41.616496 | orchestrator | ok 2025-05-29 00:14:41.626664 | 2025-05-29 00:14:41.626799 | TASK [Deploy manager + bootstrap nodes] 2025-05-29 00:14:44.109008 | orchestrator | 2025-05-29 00:14:44.109238 | orchestrator | # DEPLOY MANAGER 2025-05-29 00:14:44.109263 | orchestrator | 2025-05-29 00:14:44.109277 | orchestrator | + set -e 2025-05-29 00:14:44.109291 | orchestrator | + echo 2025-05-29 00:14:44.109304 | orchestrator | + echo '# DEPLOY MANAGER' 2025-05-29 00:14:44.109322 | orchestrator | + echo 2025-05-29 00:14:44.109368 | orchestrator | + cat /opt/manager-vars.sh 2025-05-29 00:14:44.112835 | orchestrator | export NUMBER_OF_NODES=6 2025-05-29 00:14:44.112883 | orchestrator | 2025-05-29 00:14:44.112896 | orchestrator | export CEPH_VERSION=reef 2025-05-29 00:14:44.112908 | orchestrator | export CONFIGURATION_VERSION=main 2025-05-29 00:14:44.112920 | orchestrator 
| export MANAGER_VERSION=8.1.0 2025-05-29 00:14:44.112943 | orchestrator | export OPENSTACK_VERSION=2024.2 2025-05-29 00:14:44.112954 | orchestrator | 2025-05-29 00:14:44.112972 | orchestrator | export ARA=false 2025-05-29 00:14:44.112983 | orchestrator | export TEMPEST=false 2025-05-29 00:14:44.113000 | orchestrator | export IS_ZUUL=true 2025-05-29 00:14:44.113011 | orchestrator | 2025-05-29 00:14:44.113029 | orchestrator | export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2025-05-29 00:14:44.113040 | orchestrator | export EXTERNAL_API=false 2025-05-29 00:14:44.113051 | orchestrator | 2025-05-29 00:14:44.113073 | orchestrator | export IMAGE_USER=ubuntu 2025-05-29 00:14:44.113084 | orchestrator | export IMAGE_NODE_USER=ubuntu 2025-05-29 00:14:44.113095 | orchestrator | 2025-05-29 00:14:44.113108 | orchestrator | export CEPH_STACK=ceph-ansible 2025-05-29 00:14:44.113128 | orchestrator | 2025-05-29 00:14:44.113139 | orchestrator | + echo 2025-05-29 00:14:44.113150 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-29 00:14:44.114245 | orchestrator | ++ export INTERACTIVE=false 2025-05-29 00:14:44.114267 | orchestrator | ++ INTERACTIVE=false 2025-05-29 00:14:44.114283 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-29 00:14:44.114302 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-29 00:14:44.114330 | orchestrator | + source /opt/manager-vars.sh 2025-05-29 00:14:44.114348 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-29 00:14:44.114366 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-29 00:14:44.114400 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-29 00:14:44.114421 | orchestrator | ++ CEPH_VERSION=reef 2025-05-29 00:14:44.114439 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-29 00:14:44.114459 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-29 00:14:44.114479 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-29 00:14:44.114499 | orchestrator | ++ MANAGER_VERSION=8.1.0 2025-05-29 00:14:44.114524 | 
orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-29 00:14:44.114539 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-29 00:14:44.114550 | orchestrator | ++ export ARA=false 2025-05-29 00:14:44.114561 | orchestrator | ++ ARA=false 2025-05-29 00:14:44.114582 | orchestrator | ++ export TEMPEST=false 2025-05-29 00:14:44.114593 | orchestrator | ++ TEMPEST=false 2025-05-29 00:14:44.114603 | orchestrator | ++ export IS_ZUUL=true 2025-05-29 00:14:44.114614 | orchestrator | ++ IS_ZUUL=true 2025-05-29 00:14:44.114625 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2025-05-29 00:14:44.114636 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2025-05-29 00:14:44.114646 | orchestrator | ++ export EXTERNAL_API=false 2025-05-29 00:14:44.114657 | orchestrator | ++ EXTERNAL_API=false 2025-05-29 00:14:44.114667 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-29 00:14:44.114678 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-29 00:14:44.114689 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-29 00:14:44.114720 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-29 00:14:44.114731 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-29 00:14:44.114742 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-29 00:14:44.114753 | orchestrator | + sudo ln -sf /opt/configuration/contrib/semver2.sh /usr/local/bin/semver 2025-05-29 00:14:44.173472 | orchestrator | + docker version 2025-05-29 00:14:44.443238 | orchestrator | Client: Docker Engine - Community 2025-05-29 00:14:44.443342 | orchestrator | Version: 26.1.4 2025-05-29 00:14:44.443362 | orchestrator | API version: 1.45 2025-05-29 00:14:44.443374 | orchestrator | Go version: go1.21.11 2025-05-29 00:14:44.443385 | orchestrator | Git commit: 5650f9b 2025-05-29 00:14:44.443396 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-29 00:14:44.443408 | orchestrator | OS/Arch: linux/amd64 2025-05-29 00:14:44.443419 | orchestrator | Context: default 2025-05-29 00:14:44.443430 | 
orchestrator | 2025-05-29 00:14:44.443441 | orchestrator | Server: Docker Engine - Community 2025-05-29 00:14:44.443452 | orchestrator | Engine: 2025-05-29 00:14:44.443463 | orchestrator | Version: 26.1.4 2025-05-29 00:14:44.443473 | orchestrator | API version: 1.45 (minimum version 1.24) 2025-05-29 00:14:44.443484 | orchestrator | Go version: go1.21.11 2025-05-29 00:14:44.443495 | orchestrator | Git commit: de5c9cf 2025-05-29 00:14:44.443550 | orchestrator | Built: Wed Jun 5 11:28:57 2024 2025-05-29 00:14:44.443563 | orchestrator | OS/Arch: linux/amd64 2025-05-29 00:14:44.443574 | orchestrator | Experimental: false 2025-05-29 00:14:44.443584 | orchestrator | containerd: 2025-05-29 00:14:44.443596 | orchestrator | Version: 1.7.27 2025-05-29 00:14:44.443607 | orchestrator | GitCommit: 05044ec0a9a75232cad458027ca83437aae3f4da 2025-05-29 00:14:44.443618 | orchestrator | runc: 2025-05-29 00:14:44.443629 | orchestrator | Version: 1.2.5 2025-05-29 00:14:44.443640 | orchestrator | GitCommit: v1.2.5-0-g59923ef 2025-05-29 00:14:44.443651 | orchestrator | docker-init: 2025-05-29 00:14:44.443662 | orchestrator | Version: 0.19.0 2025-05-29 00:14:44.443673 | orchestrator | GitCommit: de40ad0 2025-05-29 00:14:44.446868 | orchestrator | + sh -c /opt/configuration/scripts/deploy/000-manager.sh 2025-05-29 00:14:44.455121 | orchestrator | + set -e 2025-05-29 00:14:44.455199 | orchestrator | + source /opt/manager-vars.sh 2025-05-29 00:14:44.455215 | orchestrator | ++ export NUMBER_OF_NODES=6 2025-05-29 00:14:44.455227 | orchestrator | ++ NUMBER_OF_NODES=6 2025-05-29 00:14:44.455238 | orchestrator | ++ export CEPH_VERSION=reef 2025-05-29 00:14:44.455257 | orchestrator | ++ CEPH_VERSION=reef 2025-05-29 00:14:44.455268 | orchestrator | ++ export CONFIGURATION_VERSION=main 2025-05-29 00:14:44.455281 | orchestrator | ++ CONFIGURATION_VERSION=main 2025-05-29 00:14:44.455292 | orchestrator | ++ export MANAGER_VERSION=8.1.0 2025-05-29 00:14:44.455303 | orchestrator | ++ MANAGER_VERSION=8.1.0 
2025-05-29 00:14:44.455314 | orchestrator | ++ export OPENSTACK_VERSION=2024.2 2025-05-29 00:14:44.455324 | orchestrator | ++ OPENSTACK_VERSION=2024.2 2025-05-29 00:14:44.455335 | orchestrator | ++ export ARA=false 2025-05-29 00:14:44.455346 | orchestrator | ++ ARA=false 2025-05-29 00:14:44.455357 | orchestrator | ++ export TEMPEST=false 2025-05-29 00:14:44.455367 | orchestrator | ++ TEMPEST=false 2025-05-29 00:14:44.455378 | orchestrator | ++ export IS_ZUUL=true 2025-05-29 00:14:44.455389 | orchestrator | ++ IS_ZUUL=true 2025-05-29 00:14:44.455399 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2025-05-29 00:14:44.455411 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2 2025-05-29 00:14:44.455422 | orchestrator | ++ export EXTERNAL_API=false 2025-05-29 00:14:44.455432 | orchestrator | ++ EXTERNAL_API=false 2025-05-29 00:14:44.455443 | orchestrator | ++ export IMAGE_USER=ubuntu 2025-05-29 00:14:44.455453 | orchestrator | ++ IMAGE_USER=ubuntu 2025-05-29 00:14:44.455464 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu 2025-05-29 00:14:44.455475 | orchestrator | ++ IMAGE_NODE_USER=ubuntu 2025-05-29 00:14:44.455486 | orchestrator | ++ export CEPH_STACK=ceph-ansible 2025-05-29 00:14:44.455496 | orchestrator | ++ CEPH_STACK=ceph-ansible 2025-05-29 00:14:44.455507 | orchestrator | + source /opt/configuration/scripts/include.sh 2025-05-29 00:14:44.455517 | orchestrator | ++ export INTERACTIVE=false 2025-05-29 00:14:44.455528 | orchestrator | ++ INTERACTIVE=false 2025-05-29 00:14:44.455539 | orchestrator | ++ export OSISM_APPLY_RETRY=1 2025-05-29 00:14:44.455550 | orchestrator | ++ OSISM_APPLY_RETRY=1 2025-05-29 00:14:44.455571 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-29 00:14:44.455583 | orchestrator | + /opt/configuration/scripts/set-manager-version.sh 8.1.0 2025-05-29 00:14:44.462547 | orchestrator | + set -e 2025-05-29 00:14:44.463274 | orchestrator | + VERSION=8.1.0 2025-05-29 00:14:44.463296 | orchestrator | + sed -i 
's/manager_version: .*/manager_version: 8.1.0/g' /opt/configuration/environments/manager/configuration.yml 2025-05-29 00:14:44.471494 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-29 00:14:44.471520 | orchestrator | + sed -i /ceph_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-29 00:14:44.476406 | orchestrator | + sed -i /openstack_version:/d /opt/configuration/environments/manager/configuration.yml 2025-05-29 00:14:44.480205 | orchestrator | + sh -c /opt/configuration/scripts/sync-configuration-repository.sh 2025-05-29 00:14:44.487635 | orchestrator | /opt/configuration ~ 2025-05-29 00:14:44.487766 | orchestrator | + set -e 2025-05-29 00:14:44.487791 | orchestrator | + pushd /opt/configuration 2025-05-29 00:14:44.487807 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-29 00:14:44.489081 | orchestrator | + source /opt/venv/bin/activate 2025-05-29 00:14:44.490515 | orchestrator | ++ deactivate nondestructive 2025-05-29 00:14:44.490596 | orchestrator | ++ '[' -n '' ']' 2025-05-29 00:14:44.490611 | orchestrator | ++ '[' -n '' ']' 2025-05-29 00:14:44.490624 | orchestrator | ++ hash -r 2025-05-29 00:14:44.490635 | orchestrator | ++ '[' -n '' ']' 2025-05-29 00:14:44.490645 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-29 00:14:44.490657 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-29 00:14:44.490673 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-29 00:14:44.490685 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-29 00:14:44.490762 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-29 00:14:44.490778 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-29 00:14:44.490789 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-29 00:14:44.490811 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-29 00:14:44.490823 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-29 00:14:44.490834 | orchestrator | ++ export PATH 2025-05-29 00:14:44.490845 | orchestrator | ++ '[' -n '' ']' 2025-05-29 00:14:44.490856 | orchestrator | ++ '[' -z '' ']' 2025-05-29 00:14:44.490866 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-29 00:14:44.490877 | orchestrator | ++ PS1='(venv) ' 2025-05-29 00:14:44.490888 | orchestrator | ++ export PS1 2025-05-29 00:14:44.490898 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-29 00:14:44.490909 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-29 00:14:44.490920 | orchestrator | ++ hash -r 2025-05-29 00:14:44.490932 | orchestrator | + pip3 install --no-cache-dir python-gilt==1.2.3 requests Jinja2 PyYAML packaging 2025-05-29 00:14:45.551047 | orchestrator | Requirement already satisfied: python-gilt==1.2.3 in /opt/venv/lib/python3.12/site-packages (1.2.3) 2025-05-29 00:14:45.551869 | orchestrator | Requirement already satisfied: requests in /opt/venv/lib/python3.12/site-packages (2.32.3) 2025-05-29 00:14:45.553143 | orchestrator | Requirement already satisfied: Jinja2 in /opt/venv/lib/python3.12/site-packages (3.1.6) 2025-05-29 00:14:45.554223 | orchestrator | Requirement already satisfied: PyYAML in /opt/venv/lib/python3.12/site-packages (6.0.2) 2025-05-29 00:14:45.555381 | orchestrator | Requirement already satisfied: packaging in 
/opt/venv/lib/python3.12/site-packages (25.0) 2025-05-29 00:14:45.565197 | orchestrator | Requirement already satisfied: click in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (8.2.1) 2025-05-29 00:14:45.566681 | orchestrator | Requirement already satisfied: colorama in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.4.6) 2025-05-29 00:14:45.567719 | orchestrator | Requirement already satisfied: fasteners in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (0.19) 2025-05-29 00:14:45.568977 | orchestrator | Requirement already satisfied: sh in /opt/venv/lib/python3.12/site-packages (from python-gilt==1.2.3) (2.2.2) 2025-05-29 00:14:45.599456 | orchestrator | Requirement already satisfied: charset-normalizer<4,>=2 in /opt/venv/lib/python3.12/site-packages (from requests) (3.4.2) 2025-05-29 00:14:45.600769 | orchestrator | Requirement already satisfied: idna<4,>=2.5 in /opt/venv/lib/python3.12/site-packages (from requests) (3.10) 2025-05-29 00:14:45.602232 | orchestrator | Requirement already satisfied: urllib3<3,>=1.21.1 in /opt/venv/lib/python3.12/site-packages (from requests) (2.4.0) 2025-05-29 00:14:45.603785 | orchestrator | Requirement already satisfied: certifi>=2017.4.17 in /opt/venv/lib/python3.12/site-packages (from requests) (2025.4.26) 2025-05-29 00:14:45.607758 | orchestrator | Requirement already satisfied: MarkupSafe>=2.0 in /opt/venv/lib/python3.12/site-packages (from Jinja2) (3.0.2) 2025-05-29 00:14:45.807166 | orchestrator | ++ which gilt 2025-05-29 00:14:45.887779 | orchestrator | + GILT=/opt/venv/bin/gilt 2025-05-29 00:14:45.887845 | orchestrator | + /opt/venv/bin/gilt overlay 2025-05-29 00:14:46.010215 | orchestrator | osism.cfg-generics: 2025-05-29 00:14:46.010305 | orchestrator | - cloning osism.cfg-generics to /home/dragon/.gilt/clone/github.com/osism.cfg-generics 2025-05-29 00:14:47.575816 | orchestrator | - copied (main) 
/home/dragon/.gilt/clone/github.com/osism.cfg-generics/environments/manager/images.yml to /opt/configuration/environments/manager/ 2025-05-29 00:14:47.575909 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/render-images.py to /opt/configuration/environments/manager/ 2025-05-29 00:14:47.576173 | orchestrator | - copied (main) /home/dragon/.gilt/clone/github.com/osism.cfg-generics/src/set-versions.py to /opt/configuration/environments/ 2025-05-29 00:14:47.576196 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh render-images` in /opt/configuration/environments/manager/ 2025-05-29 00:14:48.456745 | orchestrator | - running `rm render-images.py` in /opt/configuration/environments/manager/ 2025-05-29 00:14:48.467011 | orchestrator | - running `/opt/configuration/scripts/wrapper-gilt.sh set-versions` in /opt/configuration/environments/ 2025-05-29 00:14:48.949502 | orchestrator | - running `rm set-versions.py` in /opt/configuration/environments/ 2025-05-29 00:14:48.996249 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-29 00:14:48.996363 | orchestrator | + deactivate 2025-05-29 00:14:48.996379 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-29 00:14:48.996393 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-29 00:14:48.996405 | orchestrator | + export PATH 2025-05-29 00:14:48.996416 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-29 00:14:48.996428 | orchestrator | + '[' -n '' ']' 2025-05-29 00:14:48.996439 | orchestrator | + hash -r 2025-05-29 00:14:48.996450 | orchestrator | + '[' -n '' ']' 2025-05-29 00:14:48.996461 | orchestrator | + unset VIRTUAL_ENV 2025-05-29 00:14:48.996471 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-29 00:14:48.996482 | orchestrator | + '[' '!' 
'' = nondestructive ']' 2025-05-29 00:14:48.996493 | orchestrator | + unset -f deactivate 2025-05-29 00:14:48.996504 | orchestrator | ~ 2025-05-29 00:14:48.996515 | orchestrator | + popd 2025-05-29 00:14:48.998268 | orchestrator | + [[ 8.1.0 == \l\a\t\e\s\t ]] 2025-05-29 00:14:48.998365 | orchestrator | + [[ ceph-ansible == \r\o\o\k ]] 2025-05-29 00:14:48.998997 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-29 00:14:49.048881 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-29 00:14:49.048981 | orchestrator | + echo 'enable_osism_kubernetes: true' 2025-05-29 00:14:49.048998 | orchestrator | + /opt/configuration/scripts/enable-resource-nodes.sh 2025-05-29 00:14:49.095562 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-29 00:14:49.095683 | orchestrator | + source /opt/venv/bin/activate 2025-05-29 00:14:49.095756 | orchestrator | ++ deactivate nondestructive 2025-05-29 00:14:49.095769 | orchestrator | ++ '[' -n '' ']' 2025-05-29 00:14:49.095780 | orchestrator | ++ '[' -n '' ']' 2025-05-29 00:14:49.095798 | orchestrator | ++ hash -r 2025-05-29 00:14:49.095810 | orchestrator | ++ '[' -n '' ']' 2025-05-29 00:14:49.095821 | orchestrator | ++ unset VIRTUAL_ENV 2025-05-29 00:14:49.095832 | orchestrator | ++ unset VIRTUAL_ENV_PROMPT 2025-05-29 00:14:49.095843 | orchestrator | ++ '[' '!' 
nondestructive = nondestructive ']' 2025-05-29 00:14:49.095869 | orchestrator | ++ '[' linux-gnu = cygwin ']' 2025-05-29 00:14:49.095880 | orchestrator | ++ '[' linux-gnu = msys ']' 2025-05-29 00:14:49.095891 | orchestrator | ++ export VIRTUAL_ENV=/opt/venv 2025-05-29 00:14:49.095902 | orchestrator | ++ VIRTUAL_ENV=/opt/venv 2025-05-29 00:14:49.095914 | orchestrator | ++ _OLD_VIRTUAL_PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-29 00:14:49.095926 | orchestrator | ++ PATH=/opt/venv/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-29 00:14:49.095937 | orchestrator | ++ export PATH 2025-05-29 00:14:49.095948 | orchestrator | ++ '[' -n '' ']' 2025-05-29 00:14:49.095959 | orchestrator | ++ '[' -z '' ']' 2025-05-29 00:14:49.095970 | orchestrator | ++ _OLD_VIRTUAL_PS1= 2025-05-29 00:14:49.095981 | orchestrator | ++ PS1='(venv) ' 2025-05-29 00:14:49.095992 | orchestrator | ++ export PS1 2025-05-29 00:14:49.096002 | orchestrator | ++ VIRTUAL_ENV_PROMPT='(venv) ' 2025-05-29 00:14:49.096013 | orchestrator | ++ export VIRTUAL_ENV_PROMPT 2025-05-29 00:14:49.096024 | orchestrator | ++ hash -r 2025-05-29 00:14:49.096035 | orchestrator | + ansible-playbook -i testbed-manager, --vault-password-file /opt/configuration/environments/.vault_pass /opt/configuration/ansible/manager-part-3.yml 2025-05-29 00:14:50.260629 | orchestrator | 2025-05-29 00:14:50.260824 | orchestrator | PLAY [Copy custom facts] ******************************************************* 2025-05-29 00:14:50.260846 | orchestrator | 2025-05-29 00:14:50.260858 | orchestrator | TASK [Create custom facts directory] ******************************************* 2025-05-29 00:14:50.849614 | orchestrator | ok: [testbed-manager] 2025-05-29 00:14:50.850568 | orchestrator | 2025-05-29 00:14:50.850621 | orchestrator | TASK [Copy fact files] ********************************************************* 
2025-05-29 00:14:51.821248 | orchestrator | changed: [testbed-manager] 2025-05-29 00:14:51.821358 | orchestrator | 2025-05-29 00:14:51.821375 | orchestrator | PLAY [Before the deployment of the manager] ************************************ 2025-05-29 00:14:51.821388 | orchestrator | 2025-05-29 00:14:51.821399 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-29 00:14:54.161458 | orchestrator | ok: [testbed-manager] 2025-05-29 00:14:54.161541 | orchestrator | 2025-05-29 00:14:54.161547 | orchestrator | TASK [Pull images] ************************************************************* 2025-05-29 00:14:59.711111 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ara-server:1.7.2) 2025-05-29 00:14:59.711206 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/mariadb:11.6.2) 2025-05-29 00:14:59.711217 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/ceph-ansible:8.1.0) 2025-05-29 00:14:59.711224 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/inventory-reconciler:8.1.0) 2025-05-29 00:14:59.711231 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/kolla-ansible:8.1.0) 2025-05-29 00:14:59.711242 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/redis:7.4.1-alpine) 2025-05-29 00:14:59.711249 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/netbox:v4.1.7) 2025-05-29 00:14:59.711258 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism-ansible:8.1.0) 2025-05-29 00:14:59.711265 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/osism/osism:0.20241219.2) 2025-05-29 00:14:59.711271 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/library/postgres:16.6-alpine) 2025-05-29 00:14:59.711278 | orchestrator | changed: 
[testbed-manager] => (item=registry.osism.tech/dockerhub/library/traefik:v3.2.1) 2025-05-29 00:14:59.711285 | orchestrator | changed: [testbed-manager] => (item=registry.osism.tech/dockerhub/hashicorp/vault:1.18.2) 2025-05-29 00:14:59.711292 | orchestrator | 2025-05-29 00:14:59.711300 | orchestrator | TASK [Check status] ************************************************************ 2025-05-29 00:16:15.754155 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-29 00:16:15.754301 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 2025-05-29 00:16:15.754316 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (118 retries left). 2025-05-29 00:16:15.754328 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (117 retries left). 2025-05-29 00:16:15.754354 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j588237126426.1590', 'results_file': '/home/dragon/.ansible_async/j588237126426.1590', 'changed': True, 'item': 'registry.osism.tech/osism/ara-server:1.7.2', 'ansible_loop_var': 'item'}) 2025-05-29 00:16:15.754376 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j217623725191.1615', 'results_file': '/home/dragon/.ansible_async/j217623725191.1615', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/mariadb:11.6.2', 'ansible_loop_var': 'item'}) 2025-05-29 00:16:15.754393 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 2025-05-29 00:16:15.754404 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (119 retries left). 
2025-05-29 00:16:15.754415 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j92514295795.1640', 'results_file': '/home/dragon/.ansible_async/j92514295795.1640', 'changed': True, 'item': 'registry.osism.tech/osism/ceph-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-29 00:16:15.754427 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j39824236915.1672', 'results_file': '/home/dragon/.ansible_async/j39824236915.1672', 'changed': True, 'item': 'registry.osism.tech/osism/inventory-reconciler:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-29 00:16:15.754439 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j658015737645.1704', 'results_file': '/home/dragon/.ansible_async/j658015737645.1704', 'changed': True, 'item': 'registry.osism.tech/osism/kolla-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-29 00:16:15.754450 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j719705285428.1736', 'results_file': '/home/dragon/.ansible_async/j719705285428.1736', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/redis:7.4.1-alpine', 'ansible_loop_var': 'item'}) 2025-05-29 00:16:15.754461 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check status (120 retries left). 
2025-05-29 00:16:15.754510 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j360044314265.1769', 'results_file': '/home/dragon/.ansible_async/j360044314265.1769', 'changed': True, 'item': 'registry.osism.tech/osism/netbox:v4.1.7', 'ansible_loop_var': 'item'}) 2025-05-29 00:16:15.754522 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j404047722792.1803', 'results_file': '/home/dragon/.ansible_async/j404047722792.1803', 'changed': True, 'item': 'registry.osism.tech/osism/osism-ansible:8.1.0', 'ansible_loop_var': 'item'}) 2025-05-29 00:16:15.754533 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j297228244010.1843', 'results_file': '/home/dragon/.ansible_async/j297228244010.1843', 'changed': True, 'item': 'registry.osism.tech/osism/osism:0.20241219.2', 'ansible_loop_var': 'item'}) 2025-05-29 00:16:15.754544 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j864338519577.1875', 'results_file': '/home/dragon/.ansible_async/j864338519577.1875', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/postgres:16.6-alpine', 'ansible_loop_var': 'item'}) 2025-05-29 00:16:15.754556 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j728716820606.1908', 'results_file': '/home/dragon/.ansible_async/j728716820606.1908', 'changed': True, 'item': 'registry.osism.tech/dockerhub/library/traefik:v3.2.1', 'ansible_loop_var': 'item'}) 2025-05-29 00:16:15.754569 | orchestrator | changed: [testbed-manager] => (item={'failed': 0, 'started': 1, 'finished': 0, 'ansible_job_id': 'j2596507930.1940', 'results_file': '/home/dragon/.ansible_async/j2596507930.1940', 'changed': True, 'item': 'registry.osism.tech/dockerhub/hashicorp/vault:1.18.2', 'ansible_loop_var': 
'item'}) 2025-05-29 00:16:15.754581 | orchestrator | 2025-05-29 00:16:15.754594 | orchestrator | TASK [Get /opt/manager-vars.sh] ************************************************ 2025-05-29 00:16:15.807895 | orchestrator | ok: [testbed-manager] 2025-05-29 00:16:15.808029 | orchestrator | 2025-05-29 00:16:15.808049 | orchestrator | TASK [Add ara_server_mariadb_volume_type parameter] **************************** 2025-05-29 00:16:16.285963 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:16.286158 | orchestrator | 2025-05-29 00:16:16.286176 | orchestrator | TASK [Add netbox_postgres_volume_type parameter] ******************************* 2025-05-29 00:16:16.634112 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:16.634239 | orchestrator | 2025-05-29 00:16:16.634253 | orchestrator | TASK [Install HWE kernel package on Ubuntu] ************************************ 2025-05-29 00:16:16.975920 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:16.976035 | orchestrator | 2025-05-29 00:16:16.976051 | orchestrator | TASK [Use insecure glance configuration] *************************************** 2025-05-29 00:16:17.033161 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:16:17.033263 | orchestrator | 2025-05-29 00:16:17.033278 | orchestrator | TASK [Check if /etc/OTC_region exist] ****************************************** 2025-05-29 00:16:17.372730 | orchestrator | ok: [testbed-manager] 2025-05-29 00:16:17.372833 | orchestrator | 2025-05-29 00:16:17.372849 | orchestrator | TASK [Add nova_compute_virt_type parameter] ************************************ 2025-05-29 00:16:17.471825 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:16:17.471910 | orchestrator | 2025-05-29 00:16:17.471922 | orchestrator | PLAY [Apply role traefik & netbox] ********************************************* 2025-05-29 00:16:17.471931 | orchestrator | 2025-05-29 00:16:17.471939 | orchestrator | TASK [Gathering Facts] 
********************************************************* 2025-05-29 00:16:19.299218 | orchestrator | ok: [testbed-manager] 2025-05-29 00:16:19.299312 | orchestrator | 2025-05-29 00:16:19.299327 | orchestrator | TASK [Apply traefik role] ****************************************************** 2025-05-29 00:16:19.389039 | orchestrator | included: osism.services.traefik for testbed-manager 2025-05-29 00:16:19.389124 | orchestrator | 2025-05-29 00:16:19.389139 | orchestrator | TASK [osism.services.traefik : Include config tasks] *************************** 2025-05-29 00:16:19.455160 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/config.yml for testbed-manager 2025-05-29 00:16:19.455279 | orchestrator | 2025-05-29 00:16:19.455321 | orchestrator | TASK [osism.services.traefik : Create required directories] ******************** 2025-05-29 00:16:20.539705 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik) 2025-05-29 00:16:20.539795 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/certificates) 2025-05-29 00:16:20.539811 | orchestrator | changed: [testbed-manager] => (item=/opt/traefik/configuration) 2025-05-29 00:16:20.539822 | orchestrator | 2025-05-29 00:16:20.539835 | orchestrator | TASK [osism.services.traefik : Copy configuration files] *********************** 2025-05-29 00:16:22.389486 | orchestrator | changed: [testbed-manager] => (item=traefik.yml) 2025-05-29 00:16:22.389597 | orchestrator | changed: [testbed-manager] => (item=traefik.env) 2025-05-29 00:16:22.389612 | orchestrator | changed: [testbed-manager] => (item=certificates.yml) 2025-05-29 00:16:22.389625 | orchestrator | 2025-05-29 00:16:22.389637 | orchestrator | TASK [osism.services.traefik : Copy certificate cert files] ******************** 2025-05-29 00:16:23.028753 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-29 00:16:23.028863 | orchestrator | changed: [testbed-manager] 2025-05-29 
00:16:23.028879 | orchestrator | 2025-05-29 00:16:23.028913 | orchestrator | TASK [osism.services.traefik : Copy certificate key files] ********************* 2025-05-29 00:16:23.681414 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-29 00:16:23.681521 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:23.681538 | orchestrator | 2025-05-29 00:16:23.681551 | orchestrator | TASK [osism.services.traefik : Copy dynamic configuration] ********************* 2025-05-29 00:16:23.740397 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:16:23.740511 | orchestrator | 2025-05-29 00:16:23.740535 | orchestrator | TASK [osism.services.traefik : Remove dynamic configuration] ******************* 2025-05-29 00:16:24.130842 | orchestrator | ok: [testbed-manager] 2025-05-29 00:16:24.130977 | orchestrator | 2025-05-29 00:16:24.130992 | orchestrator | TASK [osism.services.traefik : Include service tasks] ************************** 2025-05-29 00:16:24.198124 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/traefik/tasks/service.yml for testbed-manager 2025-05-29 00:16:24.198241 | orchestrator | 2025-05-29 00:16:24.198256 | orchestrator | TASK [osism.services.traefik : Create traefik external network] **************** 2025-05-29 00:16:25.261282 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:25.261412 | orchestrator | 2025-05-29 00:16:25.261427 | orchestrator | TASK [osism.services.traefik : Copy docker-compose.yml file] ******************* 2025-05-29 00:16:26.099459 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:26.099609 | orchestrator | 2025-05-29 00:16:26.099626 | orchestrator | TASK [osism.services.traefik : Manage traefik service] ************************* 2025-05-29 00:16:29.410783 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:29.410925 | orchestrator | 2025-05-29 00:16:29.410942 | orchestrator | TASK [Apply netbox role] 
******************************************************* 2025-05-29 00:16:29.537907 | orchestrator | included: osism.services.netbox for testbed-manager 2025-05-29 00:16:29.538100 | orchestrator | 2025-05-29 00:16:29.538119 | orchestrator | TASK [osism.services.netbox : Include install tasks] *************************** 2025-05-29 00:16:29.610508 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/install-Debian-family.yml for testbed-manager 2025-05-29 00:16:29.610641 | orchestrator | 2025-05-29 00:16:29.610656 | orchestrator | TASK [osism.services.netbox : Install required packages] *********************** 2025-05-29 00:16:32.414599 | orchestrator | ok: [testbed-manager] 2025-05-29 00:16:32.414802 | orchestrator | 2025-05-29 00:16:32.414822 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-29 00:16:32.543765 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config.yml for testbed-manager 2025-05-29 00:16:32.543873 | orchestrator | 2025-05-29 00:16:32.543889 | orchestrator | TASK [osism.services.netbox : Create required directories] ********************* 2025-05-29 00:16:33.705549 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox) 2025-05-29 00:16:33.705656 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration) 2025-05-29 00:16:33.705717 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/secrets) 2025-05-29 00:16:33.705760 | orchestrator | 2025-05-29 00:16:33.705773 | orchestrator | TASK [osism.services.netbox : Include postgres config tasks] ******************* 2025-05-29 00:16:33.782830 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-postgres.yml for testbed-manager 2025-05-29 00:16:33.782908 | orchestrator | 2025-05-29 00:16:33.782924 | orchestrator | TASK 
[osism.services.netbox : Copy postgres environment files] ***************** 2025-05-29 00:16:34.470596 | orchestrator | changed: [testbed-manager] => (item=postgres) 2025-05-29 00:16:34.470767 | orchestrator | 2025-05-29 00:16:34.470785 | orchestrator | TASK [osism.services.netbox : Copy postgres configuration file] **************** 2025-05-29 00:16:35.144064 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:35.144173 | orchestrator | 2025-05-29 00:16:35.144190 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-29 00:16:35.827214 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-29 00:16:35.827318 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:35.827333 | orchestrator | 2025-05-29 00:16:35.827346 | orchestrator | TASK [osism.services.netbox : Create docker-entrypoint-initdb.d directory] ***** 2025-05-29 00:16:36.235568 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:36.235741 | orchestrator | 2025-05-29 00:16:36.235758 | orchestrator | TASK [osism.services.netbox : Check if init.sql file exists] ******************* 2025-05-29 00:16:36.596508 | orchestrator | ok: [testbed-manager] 2025-05-29 00:16:36.596617 | orchestrator | 2025-05-29 00:16:36.596631 | orchestrator | TASK [osism.services.netbox : Copy init.sql file] ****************************** 2025-05-29 00:16:36.652989 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:16:36.653077 | orchestrator | 2025-05-29 00:16:36.653089 | orchestrator | TASK [osism.services.netbox : Create init-netbox-database.sh script] *********** 2025-05-29 00:16:37.325929 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:37.326102 | orchestrator | 2025-05-29 00:16:37.326122 | orchestrator | TASK [osism.services.netbox : Include config tasks] **************************** 2025-05-29 00:16:37.412915 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/config-netbox.yml for testbed-manager 2025-05-29 00:16:37.413008 | orchestrator | 2025-05-29 00:16:37.413021 | orchestrator | TASK [osism.services.netbox : Create directories required by netbox] *********** 2025-05-29 00:16:38.242274 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/initializers) 2025-05-29 00:16:38.242379 | orchestrator | changed: [testbed-manager] => (item=/opt/netbox/configuration/startup-scripts) 2025-05-29 00:16:38.242395 | orchestrator | 2025-05-29 00:16:38.242408 | orchestrator | TASK [osism.services.netbox : Copy netbox environment files] ******************* 2025-05-29 00:16:38.939497 | orchestrator | changed: [testbed-manager] => (item=netbox) 2025-05-29 00:16:38.939607 | orchestrator | 2025-05-29 00:16:38.939624 | orchestrator | TASK [osism.services.netbox : Copy netbox configuration file] ****************** 2025-05-29 00:16:39.606504 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:39.606618 | orchestrator | 2025-05-29 00:16:39.606635 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (<= 1.26)] **** 2025-05-29 00:16:39.649374 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:16:39.649426 | orchestrator | 2025-05-29 00:16:39.649438 | orchestrator | TASK [osism.services.netbox : Copy nginx unit configuration file (> 1.26)] ***** 2025-05-29 00:16:40.362922 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:40.363040 | orchestrator | 2025-05-29 00:16:40.363058 | orchestrator | TASK [osism.services.netbox : Copy secret files] ******************************* 2025-05-29 00:16:42.163362 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-29 00:16:42.163446 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-29 00:16:42.163455 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-29 00:16:42.163462 | orchestrator | changed: 
[testbed-manager] 2025-05-29 00:16:42.163471 | orchestrator | 2025-05-29 00:16:42.163477 | orchestrator | TASK [osism.services.netbox : Deploy initializers for netbox] ****************** 2025-05-29 00:16:48.129613 | orchestrator | changed: [testbed-manager] => (item=custom_fields) 2025-05-29 00:16:48.129835 | orchestrator | changed: [testbed-manager] => (item=device_roles) 2025-05-29 00:16:48.129866 | orchestrator | changed: [testbed-manager] => (item=device_types) 2025-05-29 00:16:48.129887 | orchestrator | changed: [testbed-manager] => (item=groups) 2025-05-29 00:16:48.129945 | orchestrator | changed: [testbed-manager] => (item=manufacturers) 2025-05-29 00:16:48.129962 | orchestrator | changed: [testbed-manager] => (item=object_permissions) 2025-05-29 00:16:48.129973 | orchestrator | changed: [testbed-manager] => (item=prefix_vlan_roles) 2025-05-29 00:16:48.130003 | orchestrator | changed: [testbed-manager] => (item=sites) 2025-05-29 00:16:48.130076 | orchestrator | changed: [testbed-manager] => (item=tags) 2025-05-29 00:16:48.130092 | orchestrator | changed: [testbed-manager] => (item=users) 2025-05-29 00:16:48.130103 | orchestrator | 2025-05-29 00:16:48.130115 | orchestrator | TASK [osism.services.netbox : Deploy startup scripts for netbox] *************** 2025-05-29 00:16:48.767807 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/files/startup-scripts/270_tags.py) 2025-05-29 00:16:48.767918 | orchestrator | 2025-05-29 00:16:48.767934 | orchestrator | TASK [osism.services.netbox : Include service tasks] *************************** 2025-05-29 00:16:48.859016 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/service.yml for testbed-manager 2025-05-29 00:16:48.859181 | orchestrator | 2025-05-29 00:16:48.859199 | orchestrator | TASK [osism.services.netbox : Copy netbox systemd unit file] ******************* 2025-05-29 
00:16:49.610750 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:49.610859 | orchestrator | 2025-05-29 00:16:49.610874 | orchestrator | TASK [osism.services.netbox : Create traefik external network] ***************** 2025-05-29 00:16:50.241021 | orchestrator | ok: [testbed-manager] 2025-05-29 00:16:50.241129 | orchestrator | 2025-05-29 00:16:50.241145 | orchestrator | TASK [osism.services.netbox : Copy docker-compose.yml file] ******************** 2025-05-29 00:16:50.948533 | orchestrator | changed: [testbed-manager] 2025-05-29 00:16:50.948725 | orchestrator | 2025-05-29 00:16:50.948760 | orchestrator | TASK [osism.services.netbox : Pull container images] *************************** 2025-05-29 00:16:53.354115 | orchestrator | ok: [testbed-manager] 2025-05-29 00:16:53.354238 | orchestrator | 2025-05-29 00:16:53.354257 | orchestrator | TASK [osism.services.netbox : Stop and disable old service docker-compose@netbox] *** 2025-05-29 00:16:54.294756 | orchestrator | ok: [testbed-manager] 2025-05-29 00:16:54.294861 | orchestrator | 2025-05-29 00:16:54.294877 | orchestrator | TASK [osism.services.netbox : Manage netbox service] *************************** 2025-05-29 00:17:16.490540 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage netbox service (10 retries left). 
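The netbox role above copies a systemd unit file and a docker-compose.yml, then manages the resulting service. The unit file itself is not shown in the log; as a hypothetical sketch, a unit wrapping a compose project in this style typically looks like the following (unit name, paths, and options are assumptions, not taken from the role):

```ini
# Hypothetical sketch of a systemd unit wrapping a docker compose project,
# in the spirit of the "Copy netbox systemd unit file" task above.
[Unit]
Description=netbox service (docker compose)
After=docker.service
Requires=docker.service

[Service]
WorkingDirectory=/opt/netbox
ExecStart=/usr/bin/docker compose up --remove-orphans
ExecStop=/usr/bin/docker compose down
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

With such a unit in place, "Manage netbox service" reduces to enabling and starting it via systemd, which matches the later "Stop and disable old service docker-compose@netbox" migration step.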
2025-05-29 00:17:16.490638 | orchestrator | ok: [testbed-manager] 2025-05-29 00:17:16.490699 | orchestrator | 2025-05-29 00:17:16.490713 | orchestrator | TASK [osism.services.netbox : Register that netbox service was started] ******** 2025-05-29 00:17:16.545852 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:17:16.545930 | orchestrator | 2025-05-29 00:17:16.545943 | orchestrator | TASK [osism.services.netbox : Flush handlers] ********************************** 2025-05-29 00:17:16.545955 | orchestrator | 2025-05-29 00:17:16.545967 | orchestrator | RUNNING HANDLER [osism.services.traefik : Restart traefik service] ************* 2025-05-29 00:17:16.594159 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:17:16.594253 | orchestrator | 2025-05-29 00:17:16.594268 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-29 00:17:16.683364 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/restart-service.yml for testbed-manager 2025-05-29 00:17:16.683447 | orchestrator | 2025-05-29 00:17:16.683461 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres container] ****** 2025-05-29 00:17:17.548859 | orchestrator | ok: [testbed-manager] 2025-05-29 00:17:17.548950 | orchestrator | 2025-05-29 00:17:17.548965 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres container version fact] *** 2025-05-29 00:17:17.624773 | orchestrator | ok: [testbed-manager] 2025-05-29 00:17:17.624860 | orchestrator | 2025-05-29 00:17:17.624874 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres container] *** 2025-05-29 00:17:17.682988 | orchestrator | ok: [testbed-manager] => { 2025-05-29 00:17:17.683077 | orchestrator | "msg": "The major version of the running postgres container is 16" 2025-05-29 00:17:17.683092 | orchestrator | } 2025-05-29 00:17:17.683103 | orchestrator | 2025-05-29 
00:17:17.683115 | orchestrator | RUNNING HANDLER [osism.services.netbox : Pull postgres image] ****************** 2025-05-29 00:17:18.330326 | orchestrator | ok: [testbed-manager] 2025-05-29 00:17:18.330442 | orchestrator | 2025-05-29 00:17:18.330459 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres image] ********** 2025-05-29 00:17:19.246271 | orchestrator | ok: [testbed-manager] 2025-05-29 00:17:19.246362 | orchestrator | 2025-05-29 00:17:19.246378 | orchestrator | RUNNING HANDLER [osism.services.netbox : Set postgres image version fact] ****** 2025-05-29 00:17:19.324179 | orchestrator | ok: [testbed-manager] 2025-05-29 00:17:19.324255 | orchestrator | 2025-05-29 00:17:19.324269 | orchestrator | RUNNING HANDLER [osism.services.netbox : Print major version of postgres image] *** 2025-05-29 00:17:19.379796 | orchestrator | ok: [testbed-manager] => { 2025-05-29 00:17:19.379896 | orchestrator | "msg": "The major version of the postgres image is 16" 2025-05-29 00:17:19.379912 | orchestrator | } 2025-05-29 00:17:19.379924 | orchestrator | 2025-05-29 00:17:19.379936 | orchestrator | RUNNING HANDLER [osism.services.netbox : Stop netbox service] ****************** 2025-05-29 00:17:19.428092 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:17:19.428173 | orchestrator | 2025-05-29 00:17:19.428186 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to stop] ****** 2025-05-29 00:17:19.495251 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:17:19.495314 | orchestrator | 2025-05-29 00:17:19.495324 | orchestrator | RUNNING HANDLER [osism.services.netbox : Get infos on postgres volume] ********* 2025-05-29 00:17:19.564163 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:17:19.564225 | orchestrator | 2025-05-29 00:17:19.564235 | orchestrator | RUNNING HANDLER [osism.services.netbox : Upgrade postgres database] ************ 2025-05-29 00:17:19.630134 | orchestrator | skipping: 
[testbed-manager] 2025-05-29 00:17:19.630218 | orchestrator | 2025-05-29 00:17:19.630232 | orchestrator | RUNNING HANDLER [osism.services.netbox : Remove netbox-pgautoupgrade container] *** 2025-05-29 00:17:19.685763 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:17:19.685861 | orchestrator | 2025-05-29 00:17:19.685877 | orchestrator | RUNNING HANDLER [osism.services.netbox : Start netbox service] ***************** 2025-05-29 00:17:19.741845 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:17:19.741929 | orchestrator | 2025-05-29 00:17:19.741947 | orchestrator | RUNNING HANDLER [osism.services.netbox : Restart netbox service] *************** 2025-05-29 00:17:21.109778 | orchestrator | changed: [testbed-manager] 2025-05-29 00:17:21.109869 | orchestrator | 2025-05-29 00:17:21.109885 | orchestrator | RUNNING HANDLER [osism.services.netbox : Register that netbox service was started] *** 2025-05-29 00:17:21.189047 | orchestrator | ok: [testbed-manager] 2025-05-29 00:17:21.189128 | orchestrator | 2025-05-29 00:17:21.189141 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for netbox service to start] ***** 2025-05-29 00:18:21.241201 | orchestrator | Pausing for 60 seconds 2025-05-29 00:18:21.241291 | orchestrator | changed: [testbed-manager] 2025-05-29 00:18:21.241306 | orchestrator | 2025-05-29 00:18:21.241317 | orchestrator | RUNNING HANDLER [osism.services.netbox : Wait for an healthy netbox service] *** 2025-05-29 00:18:21.308382 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netbox/tasks/wait-for-healthy-service.yml for testbed-manager 2025-05-29 00:18:21.308468 | orchestrator | 2025-05-29 00:18:21.308483 | orchestrator | RUNNING HANDLER [osism.services.netbox : Check that all containers are in a good state] *** 2025-05-29 00:22:32.757994 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (60 retries left). 
2025-05-29 00:22:32.758165 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (59 retries left). 2025-05-29 00:22:32.758182 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (58 retries left). 2025-05-29 00:22:32.758194 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (57 retries left). 2025-05-29 00:22:32.758205 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (56 retries left). 2025-05-29 00:22:32.758216 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (55 retries left). 2025-05-29 00:22:32.758227 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (54 retries left). 2025-05-29 00:22:32.758238 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (53 retries left). 2025-05-29 00:22:32.758249 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (52 retries left). 2025-05-29 00:22:32.758286 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (51 retries left). 2025-05-29 00:22:32.758298 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (50 retries left). 2025-05-29 00:22:32.758309 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (49 retries left). 2025-05-29 00:22:32.758319 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (48 retries left). 2025-05-29 00:22:32.758330 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (47 retries left). 
2025-05-29 00:22:32.758341 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (46 retries left). 2025-05-29 00:22:32.758354 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (45 retries left). 2025-05-29 00:22:32.758365 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (44 retries left). 2025-05-29 00:22:32.758376 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (43 retries left). 2025-05-29 00:22:32.758387 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (42 retries left). 2025-05-29 00:22:32.758397 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (41 retries left). 2025-05-29 00:22:32.758408 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (40 retries left). 2025-05-29 00:22:32.758418 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (39 retries left). 2025-05-29 00:22:32.758429 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (38 retries left). 2025-05-29 00:22:32.758439 | orchestrator | FAILED - RETRYING: [testbed-manager]: Check that all containers are in a good state (37 retries left). 
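The long run of "FAILED - RETRYING" messages above is Ansible's `retries`/`delay`/`until` behaviour: the health check is re-run until it passes or the retry budget is exhausted. A minimal shell sketch of the same pattern (function name and message format are my own, not the role's):

```shell
# retry_until: run a command until it succeeds or attempts run out,
# mirroring the retries/delay behaviour visible in the log above.
retry_until() {
  max_attempts=$1; delay=$2; shift 2
  attempt=1
  while ! "$@"; do
    if [ "$attempt" -ge "$max_attempts" ]; then
      echo "giving up after $attempt attempts" >&2
      return 1
    fi
    echo "FAILED - RETRYING ($((max_attempts - attempt)) retries left)" >&2
    attempt=$((attempt + 1))
    sleep "$delay"
  done
  echo "ok after $attempt attempt(s)"
}

retry_until 3 0 true    # prints "ok after 1 attempt(s)"
```

Here the check eventually succeeds with 37 retries still left, so the task ends in `changed` rather than `failed`.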
2025-05-29 00:22:32.758451 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:32.758463 | orchestrator | 2025-05-29 00:22:32.758475 | orchestrator | PLAY [Deploy manager service] ************************************************** 2025-05-29 00:22:32.758486 | orchestrator | 2025-05-29 00:22:32.758497 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-29 00:22:34.810216 | orchestrator | ok: [testbed-manager] 2025-05-29 00:22:34.810305 | orchestrator | 2025-05-29 00:22:34.810320 | orchestrator | TASK [Apply manager role] ****************************************************** 2025-05-29 00:22:34.942715 | orchestrator | included: osism.services.manager for testbed-manager 2025-05-29 00:22:34.942782 | orchestrator | 2025-05-29 00:22:34.942792 | orchestrator | TASK [osism.services.manager : Include install tasks] ************************** 2025-05-29 00:22:35.005275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/install-Debian-family.yml for testbed-manager 2025-05-29 00:22:35.005362 | orchestrator | 2025-05-29 00:22:35.005377 | orchestrator | TASK [osism.services.manager : Install required packages] ********************** 2025-05-29 00:22:36.844941 | orchestrator | ok: [testbed-manager] 2025-05-29 00:22:36.845050 | orchestrator | 2025-05-29 00:22:36.845068 | orchestrator | TASK [osism.services.manager : Gather variables for each operating system] ***** 2025-05-29 00:22:36.889692 | orchestrator | ok: [testbed-manager] 2025-05-29 00:22:36.889777 | orchestrator | 2025-05-29 00:22:36.889790 | orchestrator | TASK [osism.services.manager : Include config tasks] *************************** 2025-05-29 00:22:36.979292 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config.yml for testbed-manager 2025-05-29 00:22:36.979365 | orchestrator | 2025-05-29 00:22:36.979378 | orchestrator | TASK 
[osism.services.manager : Create required directories] ******************** 2025-05-29 00:22:39.788559 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible) 2025-05-29 00:22:39.788705 | orchestrator | changed: [testbed-manager] => (item=/opt/archive) 2025-05-29 00:22:39.788715 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/configuration) 2025-05-29 00:22:39.788722 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/data) 2025-05-29 00:22:39.788743 | orchestrator | ok: [testbed-manager] => (item=/opt/manager) 2025-05-29 00:22:39.788747 | orchestrator | changed: [testbed-manager] => (item=/opt/manager/secrets) 2025-05-29 00:22:39.788751 | orchestrator | changed: [testbed-manager] => (item=/opt/ansible/secrets) 2025-05-29 00:22:39.788755 | orchestrator | changed: [testbed-manager] => (item=/opt/state) 2025-05-29 00:22:39.788759 | orchestrator | 2025-05-29 00:22:39.788764 | orchestrator | TASK [osism.services.manager : Copy all environment file] ********************** 2025-05-29 00:22:40.434119 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:40.434252 | orchestrator | 2025-05-29 00:22:40.434276 | orchestrator | TASK [osism.services.manager : Copy client environment file] ******************* 2025-05-29 00:22:41.076629 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:41.076763 | orchestrator | 2025-05-29 00:22:41.076778 | orchestrator | TASK [osism.services.manager : Include ara config tasks] *********************** 2025-05-29 00:22:41.160944 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ara.yml for testbed-manager 2025-05-29 00:22:41.161052 | orchestrator | 2025-05-29 00:22:41.161068 | orchestrator | TASK [osism.services.manager : Copy ARA environment files] ********************* 2025-05-29 00:22:42.397347 | orchestrator | changed: [testbed-manager] => (item=ara) 2025-05-29 00:22:42.397458 | orchestrator | changed: 
[testbed-manager] => (item=ara-server) 2025-05-29 00:22:42.397473 | orchestrator | 2025-05-29 00:22:42.397486 | orchestrator | TASK [osism.services.manager : Copy MariaDB environment file] ****************** 2025-05-29 00:22:43.025019 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:43.025133 | orchestrator | 2025-05-29 00:22:43.025151 | orchestrator | TASK [osism.services.manager : Include vault config tasks] ********************* 2025-05-29 00:22:43.084610 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:22:43.084767 | orchestrator | 2025-05-29 00:22:43.084782 | orchestrator | TASK [osism.services.manager : Include ansible config tasks] ******************* 2025-05-29 00:22:43.154434 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-ansible.yml for testbed-manager 2025-05-29 00:22:43.154526 | orchestrator | 2025-05-29 00:22:43.154538 | orchestrator | TASK [osism.services.manager : Copy private ssh keys] ************************** 2025-05-29 00:22:44.570899 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-29 00:22:44.571003 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-29 00:22:44.571018 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:44.571031 | orchestrator | 2025-05-29 00:22:44.571042 | orchestrator | TASK [osism.services.manager : Copy ansible environment file] ****************** 2025-05-29 00:22:45.205169 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:45.205309 | orchestrator | 2025-05-29 00:22:45.205327 | orchestrator | TASK [osism.services.manager : Include netbox config tasks] ******************** 2025-05-29 00:22:45.296241 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-netbox.yml for testbed-manager 2025-05-29 00:22:45.296332 | orchestrator | 2025-05-29 00:22:45.296345 | orchestrator | TASK [osism.services.manager : Copy secret 
files] ****************************** 2025-05-29 00:22:46.502435 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-29 00:22:46.502542 | orchestrator | changed: [testbed-manager] => (item=None) 2025-05-29 00:22:46.502557 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:46.502569 | orchestrator | 2025-05-29 00:22:46.502580 | orchestrator | TASK [osism.services.manager : Copy netbox environment file] ******************* 2025-05-29 00:22:47.126295 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:47.126397 | orchestrator | 2025-05-29 00:22:47.126413 | orchestrator | TASK [osism.services.manager : Copy inventory-reconciler environment file] ***** 2025-05-29 00:22:47.782222 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:47.782327 | orchestrator | 2025-05-29 00:22:47.782342 | orchestrator | TASK [osism.services.manager : Include celery config tasks] ******************** 2025-05-29 00:22:47.881413 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-celery.yml for testbed-manager 2025-05-29 00:22:47.881513 | orchestrator | 2025-05-29 00:22:47.881527 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_watches] **************** 2025-05-29 00:22:48.453183 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:48.454112 | orchestrator | 2025-05-29 00:22:48.454144 | orchestrator | TASK [osism.services.manager : Set fs.inotify.max_user_instances] ************** 2025-05-29 00:22:48.872115 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:48.872220 | orchestrator | 2025-05-29 00:22:48.872318 | orchestrator | TASK [osism.services.manager : Copy celery environment files] ****************** 2025-05-29 00:22:50.140207 | orchestrator | changed: [testbed-manager] => (item=conductor) 2025-05-29 00:22:50.140320 | orchestrator | changed: [testbed-manager] => (item=openstack) 2025-05-29 00:22:50.140335 | orchestrator | 2025-05-29 
00:22:50.140349 | orchestrator | TASK [osism.services.manager : Copy listener environment file] ***************** 2025-05-29 00:22:50.793816 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:50.793923 | orchestrator | 2025-05-29 00:22:50.793940 | orchestrator | TASK [osism.services.manager : Check for conductor.yml] ************************ 2025-05-29 00:22:51.256919 | orchestrator | ok: [testbed-manager] 2025-05-29 00:22:51.256971 | orchestrator | 2025-05-29 00:22:51.256977 | orchestrator | TASK [osism.services.manager : Copy conductor configuration file] ************** 2025-05-29 00:22:51.616171 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:51.616264 | orchestrator | 2025-05-29 00:22:51.616279 | orchestrator | TASK [osism.services.manager : Copy empty conductor configuration file] ******** 2025-05-29 00:22:51.665141 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:22:51.665222 | orchestrator | 2025-05-29 00:22:51.665238 | orchestrator | TASK [osism.services.manager : Include wrapper config tasks] ******************* 2025-05-29 00:22:51.755952 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-wrapper.yml for testbed-manager 2025-05-29 00:22:51.756039 | orchestrator | 2025-05-29 00:22:51.756053 | orchestrator | TASK [osism.services.manager : Include wrapper vars file] ********************** 2025-05-29 00:22:51.804252 | orchestrator | ok: [testbed-manager] 2025-05-29 00:22:51.804336 | orchestrator | 2025-05-29 00:22:51.804349 | orchestrator | TASK [osism.services.manager : Copy wrapper scripts] *************************** 2025-05-29 00:22:53.776878 | orchestrator | changed: [testbed-manager] => (item=osism) 2025-05-29 00:22:53.776981 | orchestrator | changed: [testbed-manager] => (item=osism-update-docker) 2025-05-29 00:22:53.776997 | orchestrator | changed: [testbed-manager] => (item=osism-update-manager) 2025-05-29 00:22:53.777009 | orchestrator | 2025-05-29 
00:22:53.777021 | orchestrator | TASK [osism.services.manager : Copy cilium wrapper script] ********************* 2025-05-29 00:22:54.479927 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:54.480019 | orchestrator | 2025-05-29 00:22:54.480035 | orchestrator | TASK [osism.services.manager : Copy hubble wrapper script] ********************* 2025-05-29 00:22:55.176003 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:55.176101 | orchestrator | 2025-05-29 00:22:55.176117 | orchestrator | TASK [osism.services.manager : Copy flux wrapper script] *********************** 2025-05-29 00:22:55.884690 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:55.884804 | orchestrator | 2025-05-29 00:22:55.884820 | orchestrator | TASK [osism.services.manager : Include scripts config tasks] ******************* 2025-05-29 00:22:55.951915 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/config-scripts.yml for testbed-manager 2025-05-29 00:22:55.952035 | orchestrator | 2025-05-29 00:22:55.952057 | orchestrator | TASK [osism.services.manager : Include scripts vars file] ********************** 2025-05-29 00:22:55.998582 | orchestrator | ok: [testbed-manager] 2025-05-29 00:22:55.998729 | orchestrator | 2025-05-29 00:22:55.998745 | orchestrator | TASK [osism.services.manager : Copy scripts] *********************************** 2025-05-29 00:22:56.709810 | orchestrator | changed: [testbed-manager] => (item=osism-include) 2025-05-29 00:22:56.709901 | orchestrator | 2025-05-29 00:22:56.709918 | orchestrator | TASK [osism.services.manager : Include service tasks] ************************** 2025-05-29 00:22:56.794377 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/service.yml for testbed-manager 2025-05-29 00:22:56.794456 | orchestrator | 2025-05-29 00:22:56.794470 | orchestrator | TASK [osism.services.manager : Copy manager systemd unit 
file] ***************** 2025-05-29 00:22:57.484980 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:57.485056 | orchestrator | 2025-05-29 00:22:57.485068 | orchestrator | TASK [osism.services.manager : Create traefik external network] **************** 2025-05-29 00:22:58.104880 | orchestrator | ok: [testbed-manager] 2025-05-29 00:22:58.105010 | orchestrator | 2025-05-29 00:22:58.105030 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb < 11.0.0] *** 2025-05-29 00:22:58.171950 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:22:58.172033 | orchestrator | 2025-05-29 00:22:58.172048 | orchestrator | TASK [osism.services.manager : Set mariadb healthcheck for mariadb >= 11.0.0] *** 2025-05-29 00:22:58.232470 | orchestrator | ok: [testbed-manager] 2025-05-29 00:22:58.232547 | orchestrator | 2025-05-29 00:22:58.232561 | orchestrator | TASK [osism.services.manager : Copy docker-compose.yml file] ******************* 2025-05-29 00:22:59.054395 | orchestrator | changed: [testbed-manager] 2025-05-29 00:22:59.055157 | orchestrator | 2025-05-29 00:22:59.055189 | orchestrator | TASK [osism.services.manager : Pull container images] ************************** 2025-05-29 00:23:39.104245 | orchestrator | changed: [testbed-manager] 2025-05-29 00:23:39.104363 | orchestrator | 2025-05-29 00:23:39.104380 | orchestrator | TASK [osism.services.manager : Stop and disable old service docker-compose@manager] *** 2025-05-29 00:23:39.779456 | orchestrator | ok: [testbed-manager] 2025-05-29 00:23:39.779565 | orchestrator | 2025-05-29 00:23:39.779582 | orchestrator | TASK [osism.services.manager : Manage manager service] ************************* 2025-05-29 00:23:42.583029 | orchestrator | changed: [testbed-manager] 2025-05-29 00:23:42.583133 | orchestrator | 2025-05-29 00:23:42.583149 | orchestrator | TASK [osism.services.manager : Register that manager service was started] ****** 2025-05-29 00:23:42.636633 | orchestrator | ok: 
[testbed-manager] 2025-05-29 00:23:42.636746 | orchestrator | 2025-05-29 00:23:42.636759 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-29 00:23:42.636771 | orchestrator | 2025-05-29 00:23:42.636782 | orchestrator | RUNNING HANDLER [osism.services.manager : Restart manager service] ************* 2025-05-29 00:23:42.688988 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:23:42.689080 | orchestrator | 2025-05-29 00:23:42.689089 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for manager service to start] *** 2025-05-29 00:24:42.747339 | orchestrator | Pausing for 60 seconds 2025-05-29 00:24:42.747445 | orchestrator | changed: [testbed-manager] 2025-05-29 00:24:42.747461 | orchestrator | 2025-05-29 00:24:42.747474 | orchestrator | RUNNING HANDLER [osism.services.manager : Ensure that all containers are up] *** 2025-05-29 00:24:48.740987 | orchestrator | changed: [testbed-manager] 2025-05-29 00:24:48.741094 | orchestrator | 2025-05-29 00:24:48.741111 | orchestrator | RUNNING HANDLER [osism.services.manager : Wait for an healthy manager service] *** 2025-05-29 00:25:30.291096 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (50 retries left). 2025-05-29 00:25:30.291214 | orchestrator | FAILED - RETRYING: [testbed-manager]: Wait for an healthy manager service (49 retries left). 
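Earlier in this play the manager role raised the `fs.inotify.max_user_watches` and `fs.inotify.max_user_instances` kernel limits (needed because the manager's file-watching services can exhaust the defaults). A hedged sketch of equivalent settings; the numeric values below are assumptions, since the log does not show the role's actual values:

```shell
# Sketch of the inotify limits raised by the
# "Set fs.inotify.max_user_watches" / "max_user_instances" tasks above.
# The numeric values are assumptions, not taken from the role.
print_inotify_conf() {
  cat <<'EOF'
fs.inotify.max_user_watches = 524288
fs.inotify.max_user_instances = 512
EOF
}
# On a real host this output would be written to a file under
# /etc/sysctl.d/ and applied with: sysctl --system
print_inotify_conf
```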
2025-05-29 00:25:30.291230 | orchestrator | changed: [testbed-manager] 2025-05-29 00:25:30.291243 | orchestrator | 2025-05-29 00:25:30.291255 | orchestrator | RUNNING HANDLER [osism.services.manager : Copy osismclient bash completion script] *** 2025-05-29 00:25:35.967009 | orchestrator | changed: [testbed-manager] 2025-05-29 00:25:35.967128 | orchestrator | 2025-05-29 00:25:35.967144 | orchestrator | TASK [osism.services.manager : Include initialize tasks] *********************** 2025-05-29 00:25:36.055380 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/manager/tasks/initialize.yml for testbed-manager 2025-05-29 00:25:36.055471 | orchestrator | 2025-05-29 00:25:36.055486 | orchestrator | TASK [osism.services.manager : Flush handlers] ********************************* 2025-05-29 00:25:36.055499 | orchestrator | 2025-05-29 00:25:36.055511 | orchestrator | TASK [osism.services.manager : Include vault initialize tasks] ***************** 2025-05-29 00:25:36.106995 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:25:36.107088 | orchestrator | 2025-05-29 00:25:36.107102 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:25:36.107115 | orchestrator | testbed-manager : ok=111 changed=59 unreachable=0 failed=0 skipped=18 rescued=0 ignored=0 2025-05-29 00:25:36.107127 | orchestrator | 2025-05-29 00:25:36.242517 | orchestrator | + [[ -e /opt/venv/bin/activate ]] 2025-05-29 00:25:36.242614 | orchestrator | + deactivate 2025-05-29 00:25:36.242629 | orchestrator | + '[' -n /usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin ']' 2025-05-29 00:25:36.242692 | orchestrator | + PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin 2025-05-29 00:25:36.242735 | orchestrator | + export PATH 2025-05-29 00:25:36.242748 | orchestrator | + unset _OLD_VIRTUAL_PATH 2025-05-29 
00:25:36.242759 | orchestrator | + '[' -n '' ']' 2025-05-29 00:25:36.242770 | orchestrator | + hash -r 2025-05-29 00:25:36.242781 | orchestrator | + '[' -n '' ']' 2025-05-29 00:25:36.242791 | orchestrator | + unset VIRTUAL_ENV 2025-05-29 00:25:36.242802 | orchestrator | + unset VIRTUAL_ENV_PROMPT 2025-05-29 00:25:36.242813 | orchestrator | + '[' '!' '' = nondestructive ']' 2025-05-29 00:25:36.242823 | orchestrator | + unset -f deactivate 2025-05-29 00:25:36.242836 | orchestrator | + cp /home/dragon/.ssh/id_rsa.pub /opt/ansible/secrets/id_rsa.operator.pub 2025-05-29 00:25:36.250674 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]] 2025-05-29 00:25:36.250718 | orchestrator | + wait_for_container_healthy 60 ceph-ansible 2025-05-29 00:25:36.250729 | orchestrator | + local max_attempts=60 2025-05-29 00:25:36.250740 | orchestrator | + local name=ceph-ansible 2025-05-29 00:25:36.250751 | orchestrator | + local attempt_num=1 2025-05-29 00:25:36.251111 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible 2025-05-29 00:25:36.295576 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-29 00:25:36.295687 | orchestrator | + wait_for_container_healthy 60 kolla-ansible 2025-05-29 00:25:36.295702 | orchestrator | + local max_attempts=60 2025-05-29 00:25:36.295713 | orchestrator | + local name=kolla-ansible 2025-05-29 00:25:36.295724 | orchestrator | + local attempt_num=1 2025-05-29 00:25:36.296329 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible 2025-05-29 00:25:36.329266 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-29 00:25:36.329332 | orchestrator | + wait_for_container_healthy 60 osism-ansible 2025-05-29 00:25:36.329344 | orchestrator | + local max_attempts=60 2025-05-29 00:25:36.329355 | orchestrator | + local name=osism-ansible 2025-05-29 00:25:36.329366 | orchestrator | + local attempt_num=1 2025-05-29 00:25:36.329830 | orchestrator | ++ /usr/bin/docker inspect -f 
'{{.State.Health.Status}}' osism-ansible 2025-05-29 00:25:36.361325 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]] 2025-05-29 00:25:36.361371 | orchestrator | + [[ true == \t\r\u\e ]] 2025-05-29 00:25:36.361384 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh 2025-05-29 00:25:37.038308 | orchestrator | + docker compose --project-directory /opt/manager ps 2025-05-29 00:25:37.220078 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-29 00:25:37.220191 | orchestrator | ceph-ansible registry.osism.tech/osism/ceph-ansible:8.1.0 "/entrypoint.sh osis…" ceph-ansible About a minute ago Up About a minute (healthy) 2025-05-29 00:25:37.220215 | orchestrator | kolla-ansible registry.osism.tech/osism/kolla-ansible:8.1.0 "/entrypoint.sh osis…" kolla-ansible About a minute ago Up About a minute (healthy) 2025-05-29 00:25:37.220248 | orchestrator | manager-api-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" api About a minute ago Up About a minute (healthy) 192.168.16.5:8000->8000/tcp 2025-05-29 00:25:37.220266 | orchestrator | manager-ara-server-1 registry.osism.tech/osism/ara-server:1.7.2 "sh -c '/wait && /ru…" ara-server About a minute ago Up About a minute (healthy) 8000/tcp 2025-05-29 00:25:37.220277 | orchestrator | manager-beat-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" beat About a minute ago Up About a minute (healthy) 2025-05-29 00:25:37.220288 | orchestrator | manager-conductor-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" conductor About a minute ago Up About a minute (healthy) 2025-05-29 00:25:37.220299 | orchestrator | manager-flower-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" flower About a minute ago Up About a minute (healthy) 2025-05-29 00:25:37.220309 | orchestrator | manager-inventory_reconciler-1 registry.osism.tech/osism/inventory-reconciler:8.1.0 "/sbin/tini -- /entr…" inventory_reconciler About a minute ago Up 48 
seconds (healthy) 2025-05-29 00:25:37.220340 | orchestrator | manager-listener-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" listener About a minute ago Up About a minute (healthy) 2025-05-29 00:25:37.220351 | orchestrator | manager-mariadb-1 registry.osism.tech/dockerhub/library/mariadb:11.6.2 "docker-entrypoint.s…" mariadb About a minute ago Up About a minute (healthy) 3306/tcp 2025-05-29 00:25:37.220362 | orchestrator | manager-netbox-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" netbox About a minute ago Up About a minute (healthy) 2025-05-29 00:25:37.220372 | orchestrator | manager-openstack-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" openstack About a minute ago Up About a minute (healthy) 2025-05-29 00:25:37.220383 | orchestrator | manager-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis About a minute ago Up About a minute (healthy) 6379/tcp 2025-05-29 00:25:37.220394 | orchestrator | manager-watchdog-1 registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- os…" watchdog About a minute ago Up About a minute (healthy) 2025-05-29 00:25:37.220404 | orchestrator | osism-ansible registry.osism.tech/osism/osism-ansible:8.1.0 "/entrypoint.sh osis…" osism-ansible About a minute ago Up About a minute (healthy) 2025-05-29 00:25:37.220415 | orchestrator | osism-kubernetes registry.osism.tech/osism/osism-kubernetes:8.1.0 "/entrypoint.sh osis…" osism-kubernetes About a minute ago Up About a minute (healthy) 2025-05-29 00:25:37.220425 | orchestrator | osismclient registry.osism.tech/osism/osism:0.20241219.2 "/usr/bin/tini -- sl…" osismclient About a minute ago Up About a minute (healthy) 2025-05-29 00:25:37.225696 | orchestrator | + docker compose --project-directory /opt/netbox ps 2025-05-29 00:25:37.352043 | orchestrator | NAME IMAGE COMMAND SERVICE CREATED STATUS PORTS 2025-05-29 00:25:37.352140 | orchestrator | netbox-netbox-1 
registry.osism.tech/osism/netbox:v4.1.7 "/usr/bin/tini -- /o…" netbox 8 minutes ago Up 7 minutes (healthy) 2025-05-29 00:25:37.352159 | orchestrator | netbox-netbox-worker-1 registry.osism.tech/osism/netbox:v4.1.7 "/opt/netbox/venv/bi…" netbox-worker 8 minutes ago Up 3 minutes (healthy) 2025-05-29 00:25:37.352180 | orchestrator | netbox-postgres-1 registry.osism.tech/dockerhub/library/postgres:16.6-alpine "docker-entrypoint.s…" postgres 8 minutes ago Up 8 minutes (healthy) 5432/tcp 2025-05-29 00:25:37.352202 | orchestrator | netbox-redis-1 registry.osism.tech/dockerhub/library/redis:7.4.1-alpine "docker-entrypoint.s…" redis 8 minutes ago Up 8 minutes (healthy) 6379/tcp 2025-05-29 00:25:37.361087 | orchestrator | ++ semver 8.1.0 7.0.0 2025-05-29 00:25:37.406348 | orchestrator | + [[ 1 -ge 0 ]] 2025-05-29 00:25:37.406411 | orchestrator | + sed -i s/community.general.yaml/osism.commons.still_alive/ /opt/configuration/environments/ansible.cfg 2025-05-29 00:25:37.411258 | orchestrator | + osism apply resolvconf -l testbed-manager 2025-05-29 00:25:38.949820 | orchestrator | 2025-05-29 00:25:38 | INFO  | Task 358dd6ca-1692-40b3-a668-5a11d9e7b59f (resolvconf) was prepared for execution. 2025-05-29 00:25:38.949924 | orchestrator | 2025-05-29 00:25:38 | INFO  | It takes a moment until task 358dd6ca-1692-40b3-a668-5a11d9e7b59f (resolvconf) has been started and output is visible here. 
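The trace above shows `wait_for_container_healthy` polling `docker inspect` for each manager container before the deployment continues. A hypothetical reconstruction of that helper, inferred only from the traced variable names and the `{{.State.Health.Status}}` check (the actual script under /opt/configuration may use a different retry interval and error handling):

```shell
#!/usr/bin/env bash
# Sketch of the traced helper: poll a container's health status until it
# reports "healthy", giving up after max_attempts checks.
wait_for_container_healthy() {
    local max_attempts="$1"
    local name="$2"
    local attempt_num=1
    until [[ "$(docker inspect -f '{{.State.Health.Status}}' "${name}" 2>/dev/null)" == "healthy" ]]; do
        if (( attempt_num >= max_attempts )); then
            echo "Container ${name} did not become healthy in time" >&2
            return 1
        fi
        (( attempt_num++ ))
        sleep 5
    done
}

# Usage, as in the trace above:
# wait_for_container_healthy 60 ceph-ansible
```

In the log all three Ansible containers are already healthy, so each call succeeds on its first `docker inspect`.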
2025-05-29 00:25:41.959275 | orchestrator | 2025-05-29 00:25:41.959399 | orchestrator | PLAY [Apply role resolvconf] *************************************************** 2025-05-29 00:25:41.960266 | orchestrator | 2025-05-29 00:25:41.960956 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-29 00:25:41.961802 | orchestrator | Thursday 29 May 2025 00:25:41 +0000 (0:00:00.084) 0:00:00.084 ********** 2025-05-29 00:25:45.896299 | orchestrator | ok: [testbed-manager] 2025-05-29 00:25:45.896393 | orchestrator | 2025-05-29 00:25:45.897122 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-29 00:25:45.898070 | orchestrator | Thursday 29 May 2025 00:25:45 +0000 (0:00:03.940) 0:00:04.025 ********** 2025-05-29 00:25:45.960031 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:25:45.960755 | orchestrator | 2025-05-29 00:25:45.961414 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-29 00:25:45.961444 | orchestrator | Thursday 29 May 2025 00:25:45 +0000 (0:00:00.064) 0:00:04.090 ********** 2025-05-29 00:25:46.040537 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager 2025-05-29 00:25:46.040681 | orchestrator | 2025-05-29 00:25:46.040969 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-29 00:25:46.041287 | orchestrator | Thursday 29 May 2025 00:25:46 +0000 (0:00:00.079) 0:00:04.170 ********** 2025-05-29 00:25:46.129387 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager 2025-05-29 00:25:46.130979 | orchestrator | 2025-05-29 00:25:46.131002 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring 
/etc/resolv.conf] *** 2025-05-29 00:25:46.132109 | orchestrator | Thursday 29 May 2025 00:25:46 +0000 (0:00:00.088) 0:00:04.259 ********** 2025-05-29 00:25:47.169902 | orchestrator | ok: [testbed-manager] 2025-05-29 00:25:47.169972 | orchestrator | 2025-05-29 00:25:47.170732 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-29 00:25:47.171693 | orchestrator | Thursday 29 May 2025 00:25:47 +0000 (0:00:01.038) 0:00:05.298 ********** 2025-05-29 00:25:47.221697 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:25:47.222252 | orchestrator | 2025-05-29 00:25:47.223009 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-29 00:25:47.223892 | orchestrator | Thursday 29 May 2025 00:25:47 +0000 (0:00:00.053) 0:00:05.351 ********** 2025-05-29 00:25:47.689285 | orchestrator | ok: [testbed-manager] 2025-05-29 00:25:47.690746 | orchestrator | 2025-05-29 00:25:47.691299 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-29 00:25:47.692285 | orchestrator | Thursday 29 May 2025 00:25:47 +0000 (0:00:00.467) 0:00:05.819 ********** 2025-05-29 00:25:47.769428 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:25:47.769880 | orchestrator | 2025-05-29 00:25:47.770703 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-29 00:25:47.771100 | orchestrator | Thursday 29 May 2025 00:25:47 +0000 (0:00:00.077) 0:00:05.896 ********** 2025-05-29 00:25:48.289192 | orchestrator | changed: [testbed-manager] 2025-05-29 00:25:48.289298 | orchestrator | 2025-05-29 00:25:48.289313 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-29 00:25:48.289326 | orchestrator | Thursday 29 May 2025 00:25:48 +0000 (0:00:00.520) 0:00:06.417 ********** 2025-05-29 00:25:49.334142 | orchestrator | changed: 
[testbed-manager] 2025-05-29 00:25:49.334872 | orchestrator | 2025-05-29 00:25:49.335173 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-29 00:25:49.335924 | orchestrator | Thursday 29 May 2025 00:25:49 +0000 (0:00:01.044) 0:00:07.462 ********** 2025-05-29 00:25:50.276412 | orchestrator | ok: [testbed-manager] 2025-05-29 00:25:50.276514 | orchestrator | 2025-05-29 00:25:50.277066 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-29 00:25:50.277979 | orchestrator | Thursday 29 May 2025 00:25:50 +0000 (0:00:00.942) 0:00:08.404 ********** 2025-05-29 00:25:50.359907 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager 2025-05-29 00:25:50.360006 | orchestrator | 2025-05-29 00:25:50.360393 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-29 00:25:50.361032 | orchestrator | Thursday 29 May 2025 00:25:50 +0000 (0:00:00.084) 0:00:08.489 ********** 2025-05-29 00:25:51.518896 | orchestrator | changed: [testbed-manager] 2025-05-29 00:25:51.520153 | orchestrator | 2025-05-29 00:25:51.520219 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:25:51.520614 | orchestrator | 2025-05-29 00:25:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-29 00:25:51.520753 | orchestrator | 2025-05-29 00:25:51 | INFO  | Please wait and do not abort execution. 
2025-05-29 00:25:51.521484 | orchestrator | testbed-manager : ok=10  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-29 00:25:51.521908 | orchestrator | 2025-05-29 00:25:51.522555 | orchestrator | Thursday 29 May 2025 00:25:51 +0000 (0:00:01.158) 0:00:09.647 ********** 2025-05-29 00:25:51.524772 | orchestrator | =============================================================================== 2025-05-29 00:25:51.525211 | orchestrator | Gathering Facts --------------------------------------------------------- 3.94s 2025-05-29 00:25:51.525750 | orchestrator | osism.commons.resolvconf : Restart systemd-resolved service ------------- 1.16s 2025-05-29 00:25:51.526103 | orchestrator | osism.commons.resolvconf : Copy configuration files --------------------- 1.04s 2025-05-29 00:25:51.526866 | orchestrator | osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf --- 1.04s 2025-05-29 00:25:51.527105 | orchestrator | osism.commons.resolvconf : Start/enable systemd-resolved service -------- 0.94s 2025-05-29 00:25:51.527479 | orchestrator | osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf --- 0.52s 2025-05-29 00:25:51.528153 | orchestrator | osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf ----- 0.47s 2025-05-29 00:25:51.529876 | orchestrator | osism.commons.resolvconf : Include distribution specific installation tasks --- 0.09s 2025-05-29 00:25:51.530825 | orchestrator | osism.commons.resolvconf : Include distribution specific configuration tasks --- 0.08s 2025-05-29 00:25:51.531179 | orchestrator | osism.commons.resolvconf : Include resolvconf tasks --------------------- 0.08s 2025-05-29 00:25:51.533127 | orchestrator | osism.commons.resolvconf : Archive existing file /etc/resolv.conf ------- 0.08s 2025-05-29 00:25:51.533960 | orchestrator | osism.commons.resolvconf : Check minimum and maximum number of name servers --- 0.07s 2025-05-29 00:25:51.534411 | orchestrator | 
osism.commons.resolvconf : Install package systemd-resolved ------------- 0.05s 2025-05-29 00:25:51.905997 | orchestrator | + osism apply sshconfig 2025-05-29 00:25:53.295799 | orchestrator | 2025-05-29 00:25:53 | INFO  | Task 82621e2c-eb1e-4ee1-b299-b9ee6bd72fd5 (sshconfig) was prepared for execution. 2025-05-29 00:25:53.295912 | orchestrator | 2025-05-29 00:25:53 | INFO  | It takes a moment until task 82621e2c-eb1e-4ee1-b299-b9ee6bd72fd5 (sshconfig) has been started and output is visible here. 2025-05-29 00:25:56.273824 | orchestrator | 2025-05-29 00:25:56.274273 | orchestrator | PLAY [Apply role sshconfig] **************************************************** 2025-05-29 00:25:56.274309 | orchestrator | 2025-05-29 00:25:56.274995 | orchestrator | TASK [osism.commons.sshconfig : Get home directory of operator user] *********** 2025-05-29 00:25:56.275205 | orchestrator | Thursday 29 May 2025 00:25:56 +0000 (0:00:00.105) 0:00:00.105 ********** 2025-05-29 00:25:56.832601 | orchestrator | ok: [testbed-manager] 2025-05-29 00:25:56.832821 | orchestrator | 2025-05-29 00:25:56.832908 | orchestrator | TASK [osism.commons.sshconfig : Ensure .ssh/config.d exist] ******************** 2025-05-29 00:25:56.833673 | orchestrator | Thursday 29 May 2025 00:25:56 +0000 (0:00:00.557) 0:00:00.663 ********** 2025-05-29 00:25:57.320217 | orchestrator | changed: [testbed-manager] 2025-05-29 00:25:57.320438 | orchestrator | 2025-05-29 00:25:57.321026 | orchestrator | TASK [osism.commons.sshconfig : Ensure config for each host exist] ************* 2025-05-29 00:25:57.321813 | orchestrator | Thursday 29 May 2025 00:25:57 +0000 (0:00:00.489) 0:00:01.152 ********** 2025-05-29 00:26:02.851592 | orchestrator | changed: [testbed-manager] => (item=testbed-manager) 2025-05-29 00:26:02.851898 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3) 2025-05-29 00:26:02.852010 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4) 2025-05-29 00:26:02.853089 | orchestrator | 
changed: [testbed-manager] => (item=testbed-node-5) 2025-05-29 00:26:02.854418 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0) 2025-05-29 00:26:02.854867 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1) 2025-05-29 00:26:02.855278 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2) 2025-05-29 00:26:02.855847 | orchestrator | 2025-05-29 00:26:02.856272 | orchestrator | TASK [osism.commons.sshconfig : Add extra config] ****************************** 2025-05-29 00:26:02.856764 | orchestrator | Thursday 29 May 2025 00:26:02 +0000 (0:00:05.529) 0:00:06.682 ********** 2025-05-29 00:26:02.920315 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:26:02.920392 | orchestrator | 2025-05-29 00:26:02.920499 | orchestrator | TASK [osism.commons.sshconfig : Assemble ssh config] *************************** 2025-05-29 00:26:02.921241 | orchestrator | Thursday 29 May 2025 00:26:02 +0000 (0:00:00.070) 0:00:06.752 ********** 2025-05-29 00:26:03.499693 | orchestrator | changed: [testbed-manager] 2025-05-29 00:26:03.499781 | orchestrator | 2025-05-29 00:26:03.499797 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:26:03.500472 | orchestrator | 2025-05-29 00:26:03 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-29 00:26:03.500500 | orchestrator | 2025-05-29 00:26:03 | INFO  | Please wait and do not abort execution. 
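The sshconfig play above follows a fragment-and-assemble pattern: ensure `~/.ssh/config.d` exists, write one config fragment per host, then assemble them into a single `~/.ssh/config`. A minimal sketch of that pattern; the directory layout matches the task names, but the per-host options are illustrative, and a temp dir stands in for `~/.ssh` to keep the sketch side-effect free:

```shell
#!/usr/bin/env bash
# Fragment-and-assemble sketch of the sshconfig role's approach.
workdir="$(mktemp -d)"          # stand-in for ~/.ssh
confdir="${workdir}/config.d"
mkdir -p "${confdir}"           # "Ensure .ssh/config.d exist"

# "Ensure config for each host exist": one fragment per inventory host.
for host in testbed-manager testbed-node-0; do
    cat > "${confdir}/${host}" <<EOF
Host ${host}
    User dragon
EOF
done

# "Assemble ssh config": concatenate all fragments into one file.
cat "${confdir}"/* > "${workdir}/config"
```

Keeping one fragment per host makes the loop idempotent: rerunning the play rewrites only the fragments whose content changed, and the assemble step reports `changed` only when the combined file differs.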
2025-05-29 00:26:03.501446 | orchestrator | testbed-manager : ok=4  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0 2025-05-29 00:26:03.502490 | orchestrator | 2025-05-29 00:26:03.502526 | orchestrator | Thursday 29 May 2025 00:26:03 +0000 (0:00:00.579) 0:00:07.332 ********** 2025-05-29 00:26:03.503273 | orchestrator | =============================================================================== 2025-05-29 00:26:03.504637 | orchestrator | osism.commons.sshconfig : Ensure config for each host exist ------------- 5.53s 2025-05-29 00:26:03.505564 | orchestrator | osism.commons.sshconfig : Assemble ssh config --------------------------- 0.58s 2025-05-29 00:26:03.505962 | orchestrator | osism.commons.sshconfig : Get home directory of operator user ----------- 0.56s 2025-05-29 00:26:03.507289 | orchestrator | osism.commons.sshconfig : Ensure .ssh/config.d exist -------------------- 0.49s 2025-05-29 00:26:03.507908 | orchestrator | osism.commons.sshconfig : Add extra config ------------------------------ 0.07s 2025-05-29 00:26:03.882940 | orchestrator | + osism apply known-hosts 2025-05-29 00:26:05.254472 | orchestrator | 2025-05-29 00:26:05 | INFO  | Task ad18cc1a-f7db-4f1e-b3f3-bea179a3736e (known-hosts) was prepared for execution. 2025-05-29 00:26:05.254558 | orchestrator | 2025-05-29 00:26:05 | INFO  | It takes a moment until task ad18cc1a-f7db-4f1e-b3f3-bea179a3736e (known-hosts) has been started and output is visible here. 
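The known-hosts play that follows runs `ssh-keyscan` twice per host, once by hostname and once by `ansible_host` IP, and writes the collected keys into the shared known_hosts file. A hedged sketch of that scan-and-append pattern; `scan_hosts` and its flags are this sketch's own choices, not the role's:

```shell
#!/usr/bin/env bash
# Scan each host's rsa/ecdsa/ed25519 keys and append them to a
# known_hosts file, mirroring what the known_hosts role does per host.
scan_hosts() {
    local out="$1"; shift
    local h
    for h in "$@"; do
        # an unreachable host simply contributes no entries
        ssh-keyscan -T 2 -t rsa,ecdsa,ed25519 "${h}" 2>/dev/null || true
    done >> "${out}"
}

# Usage, matching the two passes in the play:
# scan_hosts ~/.ssh/known_hosts testbed-manager testbed-node-0   # hostnames
# scan_hosts ~/.ssh/known_hosts 192.168.16.5 192.168.16.13       # ansible_host IPs
```

Pre-seeding known_hosts this way is what lets the later deployment steps SSH to every node non-interactively without host-key prompts.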
2025-05-29 00:26:08.227630 | orchestrator | 2025-05-29 00:26:08.228450 | orchestrator | PLAY [Apply role known_hosts] ************************************************** 2025-05-29 00:26:08.228541 | orchestrator | 2025-05-29 00:26:08.229563 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname] *** 2025-05-29 00:26:08.229602 | orchestrator | Thursday 29 May 2025 00:26:08 +0000 (0:00:00.106) 0:00:00.106 ********** 2025-05-29 00:26:14.275540 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-29 00:26:14.276191 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-29 00:26:14.279811 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-29 00:26:14.281375 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-29 00:26:14.281827 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-29 00:26:14.282524 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-29 00:26:14.282776 | orchestrator | ok: [testbed-manager] => (item=testbed-node-2) 2025-05-29 00:26:14.282921 | orchestrator | 2025-05-29 00:26:14.283371 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname] *** 2025-05-29 00:26:14.283855 | orchestrator | Thursday 29 May 2025 00:26:14 +0000 (0:00:06.046) 0:00:06.152 ********** 2025-05-29 00:26:14.433577 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-29 00:26:14.434288 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-29 00:26:14.435373 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-29 00:26:14.437181 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-29 00:26:14.437210 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-29 00:26:14.437959 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-29 00:26:14.438396 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-29 00:26:14.439135 | orchestrator | 2025-05-29 00:26:14.439764 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:14.440690 | orchestrator | Thursday 29 May 2025 00:26:14 +0000 (0:00:00.162) 0:00:06.315 ********** 2025-05-29 00:26:15.591462 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILabNIf/fhgKlKFVoGfTOwqSA28eoP1bWKQbSZ5aE2+X) 2025-05-29 00:26:15.591979 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDLlizD1SI1rC2weI3w2QE9dFxWDCaxWOcHjmiIXWHADyWSkQpHVF/u3sYqUgtwccx5sbYmFJdQkMoOcGDzY8eTrkjZxWr2je73BYqSPT3oSspmjlMcX9RoOkP1Id5XbyfKaVeeJjCwXb8JeXd6NFliVSVy76E7f6hLFDdwlmIrJUTYR5D8Aw1L0G77Ma1J9MZxyPVF7+1rB3TV8N/dk2uat75fMbSTU8lxNllaOglzY4RJd2u/Z2l0sc1qaJQko87JqhOyxeGzAqzWF6T7jFIxRb4QW8waHco+Br0mxELpX7AMu/71A7NL0BAto4WlsHHOm/Xwml4pkMpw0wzcNt8GLvDBNyo0heWfZoaWdkMljKyZX8y3i0W+9TlgzRm72ggn6T1dGLCiOJLd0nsJEmhkQjkvvnDG4YhNP6plUy26MFGSbYUBZe++Pw0aR+sVZTs8YtZEXlmQ4fetMacvz1TP1ZNj+zfZpNXP3+uF7+NtCkILlwwMKIcv8lz/af8HpO8=) 2025-05-29 00:26:15.593392 | orchestrator | changed: [testbed-manager] => (item=testbed-manager ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOZ7TwCRpM/VLIdkp66AcI7nuUCgqV6xiCt6IJEv+G1pLsE3ESNq/jaR3tVnba3isIVAQt7yKdphrTsA5yv0n4s=) 2025-05-29 00:26:15.593779 | orchestrator | 2025-05-29 00:26:15.594416 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:15.595184 | orchestrator | Thursday 29 May 2025 00:26:15 +0000 (0:00:01.152) 0:00:07.467 ********** 2025-05-29 00:26:16.561331 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLTzb1KCzPwV43XnpY4p08kfydpCtOJT7a/TLXSm3Lu9udTK28bxwl1r5fIv6j2hhUvvSIGTPWZua7iQKlnNs1McueMn6RE8k372hdQXanzA9oKdNeh3jGIynep9YuYE/kLNPr5Uyl8sJv4l+xnLu/f0UJAopomQLVoxrhJA8X+oZShIw9+uvXg3f91/OilHWY/MIS8Fm7Y2ykpa/go6cilxWGI1kIrNZ6fWm9Yi4XTB503b6Twx3bwcLkqiyk+pjVeyrE3MvIPw/PkIk/BhHF4l0xvo5oH0tDX/aAMSpRkEtGFwXjNe5qov93FDwu/BT4V6x68FJBz/GqqcIpINCSnPLi7Jfms8Neb+a/4PUsPcxnjorXj9szKEwBtTNYNA0y/DY/Is7AOupxflprUnux+1hjptDcSDcvVvQmDYOlcE0sdW4hDGvGezaKSN/at3QF7UPbH8v5i7537ZOT03qIc7SZ30I7nm88IGSxMsFqMcjv1I/CygxWYHRwugldOik=) 2025-05-29 00:26:16.561633 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKC1N30fiqcf5j00pOo6iQ0AvtzfMsNKA2S5+cq5sMmJGzEsSQvuygHaWDw4AqYawHwHW1qmfTYjz50yOK/P5Oo=) 
2025-05-29 00:26:16.562092 | orchestrator | changed: [testbed-manager] => (item=testbed-node-3 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKpA3xf5PWnVwUHRxbFfg+GQGcNOdqq7sKjcduFyhTfv) 2025-05-29 00:26:16.562702 | orchestrator | 2025-05-29 00:26:16.563943 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:16.563969 | orchestrator | Thursday 29 May 2025 00:26:16 +0000 (0:00:00.973) 0:00:08.440 ********** 2025-05-29 00:26:17.602534 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIxDdDAh+ScHWHmtiJsHYsQlpBXVDAv1YZhLIUNOyplQ4VlsP6XBfJn9uKhhVWkAmMZgegy9Keqt9UTUXwiBbYI=) 2025-05-29 00:26:17.602827 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAegJW8yA6SezZRhAYkfbPsxDwM0EcJykExWXZ6IQ3MytiIXplcYQVu0iqJmUtHPxofnnzwhGIv2GUX3zgxy6RqCg4bjn/PtMoiTp04+9OgwXQfoqDjs7WlcXOdiwjA3HpJquM7avg1tybV+vMW3j9mhdYVjiEzHNzj7LhkDEV8OuWG29spTQUl62Q15Eug0welXRmWfhYMLDLejE145d7qfgNixjW+g6R4uk8X1oTnbEzrg3XVqegZq8BGvC45Gb09aCOKRqO+1ZKsiaIEPIcAfWymGxlKADrujK95FW7qqna2nSF/jdlgwfjLQfzvz/+nmpNV3U9rIbPAEP3A3Yinr4tkFIpDMg4bOpthKwQ1gJaZDxN0DY8VPcEZ6kQV2oO1HZqHj9sqIpXcXt3sUq9Un4qHtDpznzwivd1TDN5u4JQX+aQsG/0ge+jE3NocEHlKXTEY0eSgDbcw0xeR0+zQAc0a1BRl5dLx5j24ZJwLM38iRRWuH8CPeCbKg/klp8=) 2025-05-29 00:26:17.604427 | orchestrator | changed: [testbed-manager] => (item=testbed-node-4 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEUEzhv82k7mPLdXk+tJ5gkDXYoe6asV0EyLsVk+uAza) 2025-05-29 00:26:17.604784 | orchestrator | 2025-05-29 00:26:17.606212 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:17.607061 | orchestrator | Thursday 29 May 2025 00:26:17 +0000 (0:00:01.041) 0:00:09.482 ********** 2025-05-29 00:26:18.617742 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCiKRnxJEagcG5BRCakqv3dkUXKVLEVXxHM7qS61pLJnJol1vjuEbdrnAPDpBVLC9NqmFjS8w+kXrnLEDBvTRBiPBoOe6sJ6zfeIlcocEehAFix8XwDOmwUdRSygnVWk28T45hwuS8o6UvR8pdwDN0hNJFcH6o07UNcxacGL3J+MBje/1udQzOMOug1qzy4WJqqWAEN9kgnqBN4RTYzYmV5eArPHBKagS8DD+bjcXOJV8KYpWhrMlmP+se3KTbogOcxTEFgqWMRIJcCkWO/vGY/V0OPPES3A/JtzobLYe7MZYc9Gx0sN0YWuB31h88F8UdXQ+s/+qifXxJlg/Y7467jniajeqLCaEFhKCYzZvP1xcjU1ajbaCiHOaMFEduyWT+DVCk7Ab/bgHp+YahdjeuSF5qNXgP9og3l81qPesA2+eUgTZ1G0zP+QME+Di3fLDOyuAobr/JCC80/9+re3tgbfSKlGN8JRQJx1m3o3SKtZP9Nho6i1MqwlYzBDR4jkoE=) 2025-05-29 00:26:18.617966 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJxN2h9BWyW85AbPDwpxUYYQold3Mb/Hk50Tp4pcSGJEZE3qWFYOdbslsyaLSLntTyZ8Pw97CU0qES7TGDdzujo=) 2025-05-29 00:26:18.618773 | orchestrator | changed: [testbed-manager] => (item=testbed-node-5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICs+Gupl6NV9a5GWmj7/UEQg1NedQfAUhSGkPmhOQjDe) 2025-05-29 00:26:18.620333 | orchestrator | 2025-05-29 00:26:18.620744 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:18.621282 | orchestrator | Thursday 29 May 2025 00:26:18 +0000 (0:00:01.015) 0:00:10.498 ********** 2025-05-29 00:26:19.640997 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJknGe8Ilph8rDg1sPspVHzKusJxE4JQyUj1B5XbF1JhRNremBtIIAmEmITbeQyuPJvheFTRQdXXPCjxSsoqYNoWQIqvin4+OLNVTrwjNYy5fUuXNghCg47g3o5uJVqdtpAIDJsHoJTQli7tY3WCG36fk7M6o84JWn1ZkJTIVZl5BLMETfiUEdfkPk1e4wmQ8w5YIentC0/3fuiEssMe9r4xurw1++omDRtG8jygJndAI64NLyVvr5VwXUmxtf8jAcaOdN+X7eBefTfmHXEdmG/tcBcdsBIo2k6uUDjTyjCkiXCsCsKtkn9aSAWqRVoQP7adfDTfHrqUt7JfK+pVZyA2gGtmCAWY8VYrNjNbVGuF17yxLf97uWgnP0vyqmASnrRovATvnC9ua7qxqr9SUcbCNf6QhtR0zWXebgaXJyEkgVabWzpxbJf7MSlUl7NL0lCNLTfx+O6eU+B9bOPCOZuixKNZRKaLdu/a4pEjCpBdv3ECtF7pXkoLx5ZkcVR9M=) 2025-05-29 00:26:19.641095 | orchestrator | changed: [testbed-manager] => 
(item=testbed-node-0 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJjXkCW+kV/WUxjch59kqrFdq6xlhmZicOJY1iwsgNNlIyU0qOGxoAy4gAf0I6955op5qZJoIseDZTVo83JnLV4=) 2025-05-29 00:26:19.641915 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIX3CPpvmEWhRt3rXTGuic7z0tyynAaOWsPHR2RvDnHi) 2025-05-29 00:26:19.642811 | orchestrator | 2025-05-29 00:26:19.643526 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:19.644151 | orchestrator | Thursday 29 May 2025 00:26:19 +0000 (0:00:01.021) 0:00:11.520 ********** 2025-05-29 00:26:20.696708 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCocBG4j60oaour8AZKQWe8gglG61B7Ac8zTJLZe2fqcV6S9bXdNpv5GfFtmOL23r6iyOix+vo5pdsrJdNfzFVj9WGAh25NJ64Oo7wPUJfXz7prymmSG9A/T10VAo7UXplwXqO6hIQmuzGWGxc6rJphQpi7BG6Um9QO9Bpm6Hy8NzgMG7INcDAR1WeKmyFXhgosp5TWzK5jzwA3Re6FDjrNUKFp9neHXY/HxtNwtlaQbz1xOuY58+POlOSAWMhTG2svz13QGBghPzM686nhfHlCGtjOkdp9/rpQADd8gJqGWHH7DpsucWT4zOrEqPF3A2kDbasCL3Pv8i1VxHmV9YvOTtfDVr4dZcvSXbOlETWnz+EOOXOr+7S6o71rv2gbG9ptgxPGHDAns3HBHnfNFcwNGl0mHMQvAwHbN5nAI+VPdmECA9RhALXOjGEJsi3Jlm3wojW7YfCx4D1sBHRJY3T8UpZvytr5hHZfDu3qFjqrYSe5k8f7GyI58XJ+QtXQij8=) 2025-05-29 00:26:20.697028 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGyfYN7qcRTmrfochRCNyUsOiF2RHVsOSvWpnmjl4oR/A3UN9MbJ91zWcemtGXCxn58PQs5Zd6PdV1mA02aMcIY=) 2025-05-29 00:26:20.697488 | orchestrator | changed: [testbed-manager] => (item=testbed-node-1 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIACis/nDfSDTFryFj3bNiuKGBvyZF8lVz9ekHEmILUf1) 2025-05-29 00:26:20.698223 | orchestrator | 2025-05-29 00:26:20.698775 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:20.699091 | orchestrator | Thursday 29 May 2025 00:26:20 +0000 (0:00:01.055) 
0:00:12.575 ********** 2025-05-29 00:26:21.728997 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDA6wgg/1vLxk/VieGchSwt+H+CCYJKrZS7pZbO9n6dRp1qaYAFwvKnYlPuiTKuR12GEydex5etPlWOOHVG/lS7jMKZkP/Cvx1F6spZhEY1gi7PJoxn7duyQd1xe1WEudE9Kvr6jnTxL2RJtO5zh4lzd3G4EWzsfnq45unmsQdV/GAfunpNdYLd0m+D+HXAIw7+ZX2YYoRQ/3UiXxqiMbyylxvtzf0UguFZvlvEmP9RsStlU8mBj/3hNS3BGOQ72dpO66gbht+rJynGKC9PWjmfWgvwmuLE/8gjZh6M3fZy5OMpGF5qba8YQhad1F0YOvB6OlKlN3XVXzyfgdgZQ9f/OcUW+L5Bpj5TlDs1VMfV13o8BPCB/eM0aYx+BiciNLMBFEwgIw0mCMGGNU8bFjieGdhR6CFQEInuJbhn8MqEIHbBLjIqWtf6dw6trMgnK5JuWiipXWLudIQM3qgtbcak+M2Ts9Kf2JH3TJCe4OHBdJXz9gIX9xljdwKnFi2xJQU=) 2025-05-29 00:26:21.729380 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDnsrpm8J6C9qMAmpaiK/W8MJOEl0BHymboTff8gIS0dBiGp4Y+tGiFIwh4gonR01qpW8jcVnYmBeFMA/OXSLJE=) 2025-05-29 00:26:21.729934 | orchestrator | changed: [testbed-manager] => (item=testbed-node-2 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGGUAIC9JORRrVYkc+sDZXs3XcWJpIxKJbd6Oo14Ym9n) 2025-05-29 00:26:21.731149 | orchestrator | 2025-05-29 00:26:21.731458 | orchestrator | TASK [osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host] *** 2025-05-29 00:26:21.731907 | orchestrator | Thursday 29 May 2025 00:26:21 +0000 (0:00:01.034) 0:00:13.610 ********** 2025-05-29 00:26:26.959072 | orchestrator | ok: [testbed-manager] => (item=testbed-manager) 2025-05-29 00:26:26.959189 | orchestrator | ok: [testbed-manager] => (item=testbed-node-3) 2025-05-29 00:26:26.960096 | orchestrator | ok: [testbed-manager] => (item=testbed-node-4) 2025-05-29 00:26:26.961785 | orchestrator | ok: [testbed-manager] => (item=testbed-node-5) 2025-05-29 00:26:26.962379 | orchestrator | ok: [testbed-manager] => (item=testbed-node-0) 2025-05-29 00:26:26.963172 | orchestrator | ok: [testbed-manager] => (item=testbed-node-1) 2025-05-29 00:26:26.963824 | orchestrator | ok: 
[testbed-manager] => (item=testbed-node-2) 2025-05-29 00:26:26.964704 | orchestrator | 2025-05-29 00:26:26.965513 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host] *** 2025-05-29 00:26:26.965901 | orchestrator | Thursday 29 May 2025 00:26:26 +0000 (0:00:05.227) 0:00:18.837 ********** 2025-05-29 00:26:27.124149 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-manager) 2025-05-29 00:26:27.125275 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-3) 2025-05-29 00:26:27.126710 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-4) 2025-05-29 00:26:27.127478 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-5) 2025-05-29 00:26:27.128433 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-0) 2025-05-29 00:26:27.128905 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-1) 2025-05-29 00:26:27.129678 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/known_hosts/tasks/write-scanned.yml for testbed-manager => (item=Scanned entries of testbed-node-2) 2025-05-29 00:26:27.130490 | orchestrator | 2025-05-29 00:26:27.131015 
| orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:27.131411 | orchestrator | Thursday 29 May 2025 00:26:27 +0000 (0:00:00.166) 0:00:19.004 ********** 2025-05-29 00:26:28.161256 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILabNIf/fhgKlKFVoGfTOwqSA28eoP1bWKQbSZ5aE2+X) 2025-05-29 00:26:28.161478 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLlizD1SI1rC2weI3w2QE9dFxWDCaxWOcHjmiIXWHADyWSkQpHVF/u3sYqUgtwccx5sbYmFJdQkMoOcGDzY8eTrkjZxWr2je73BYqSPT3oSspmjlMcX9RoOkP1Id5XbyfKaVeeJjCwXb8JeXd6NFliVSVy76E7f6hLFDdwlmIrJUTYR5D8Aw1L0G77Ma1J9MZxyPVF7+1rB3TV8N/dk2uat75fMbSTU8lxNllaOglzY4RJd2u/Z2l0sc1qaJQko87JqhOyxeGzAqzWF6T7jFIxRb4QW8waHco+Br0mxELpX7AMu/71A7NL0BAto4WlsHHOm/Xwml4pkMpw0wzcNt8GLvDBNyo0heWfZoaWdkMljKyZX8y3i0W+9TlgzRm72ggn6T1dGLCiOJLd0nsJEmhkQjkvvnDG4YhNP6plUy26MFGSbYUBZe++Pw0aR+sVZTs8YtZEXlmQ4fetMacvz1TP1ZNj+zfZpNXP3+uF7+NtCkILlwwMKIcv8lz/af8HpO8=) 2025-05-29 00:26:28.161820 | orchestrator | changed: [testbed-manager] => (item=192.168.16.5 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBOZ7TwCRpM/VLIdkp66AcI7nuUCgqV6xiCt6IJEv+G1pLsE3ESNq/jaR3tVnba3isIVAQt7yKdphrTsA5yv0n4s=) 2025-05-29 00:26:28.163706 | orchestrator | 2025-05-29 00:26:28.164153 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:28.164512 | orchestrator | Thursday 29 May 2025 00:26:28 +0000 (0:00:01.034) 0:00:20.039 ********** 2025-05-29 00:26:29.198328 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDLTzb1KCzPwV43XnpY4p08kfydpCtOJT7a/TLXSm3Lu9udTK28bxwl1r5fIv6j2hhUvvSIGTPWZua7iQKlnNs1McueMn6RE8k372hdQXanzA9oKdNeh3jGIynep9YuYE/kLNPr5Uyl8sJv4l+xnLu/f0UJAopomQLVoxrhJA8X+oZShIw9+uvXg3f91/OilHWY/MIS8Fm7Y2ykpa/go6cilxWGI1kIrNZ6fWm9Yi4XTB503b6Twx3bwcLkqiyk+pjVeyrE3MvIPw/PkIk/BhHF4l0xvo5oH0tDX/aAMSpRkEtGFwXjNe5qov93FDwu/BT4V6x68FJBz/GqqcIpINCSnPLi7Jfms8Neb+a/4PUsPcxnjorXj9szKEwBtTNYNA0y/DY/Is7AOupxflprUnux+1hjptDcSDcvVvQmDYOlcE0sdW4hDGvGezaKSN/at3QF7UPbH8v5i7537ZOT03qIc7SZ30I7nm88IGSxMsFqMcjv1I/CygxWYHRwugldOik=) 2025-05-29 00:26:29.198443 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBKC1N30fiqcf5j00pOo6iQ0AvtzfMsNKA2S5+cq5sMmJGzEsSQvuygHaWDw4AqYawHwHW1qmfTYjz50yOK/P5Oo=) 2025-05-29 00:26:29.199085 | orchestrator | changed: [testbed-manager] => (item=192.168.16.13 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKpA3xf5PWnVwUHRxbFfg+GQGcNOdqq7sKjcduFyhTfv) 2025-05-29 00:26:29.199540 | orchestrator | 2025-05-29 00:26:29.200212 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:29.200917 | orchestrator | Thursday 29 May 2025 00:26:29 +0000 (0:00:01.039) 0:00:21.078 ********** 2025-05-29 00:26:30.217750 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDAegJW8yA6SezZRhAYkfbPsxDwM0EcJykExWXZ6IQ3MytiIXplcYQVu0iqJmUtHPxofnnzwhGIv2GUX3zgxy6RqCg4bjn/PtMoiTp04+9OgwXQfoqDjs7WlcXOdiwjA3HpJquM7avg1tybV+vMW3j9mhdYVjiEzHNzj7LhkDEV8OuWG29spTQUl62Q15Eug0welXRmWfhYMLDLejE145d7qfgNixjW+g6R4uk8X1oTnbEzrg3XVqegZq8BGvC45Gb09aCOKRqO+1ZKsiaIEPIcAfWymGxlKADrujK95FW7qqna2nSF/jdlgwfjLQfzvz/+nmpNV3U9rIbPAEP3A3Yinr4tkFIpDMg4bOpthKwQ1gJaZDxN0DY8VPcEZ6kQV2oO1HZqHj9sqIpXcXt3sUq9Un4qHtDpznzwivd1TDN5u4JQX+aQsG/0ge+jE3NocEHlKXTEY0eSgDbcw0xeR0+zQAc0a1BRl5dLx5j24ZJwLM38iRRWuH8CPeCbKg/klp8=) 2025-05-29 00:26:30.217951 | orchestrator | changed: [testbed-manager] => 
(item=192.168.16.14 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBIxDdDAh+ScHWHmtiJsHYsQlpBXVDAv1YZhLIUNOyplQ4VlsP6XBfJn9uKhhVWkAmMZgegy9Keqt9UTUXwiBbYI=) 2025-05-29 00:26:30.219175 | orchestrator | changed: [testbed-manager] => (item=192.168.16.14 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEUEzhv82k7mPLdXk+tJ5gkDXYoe6asV0EyLsVk+uAza) 2025-05-29 00:26:30.219496 | orchestrator | 2025-05-29 00:26:30.219823 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:30.220164 | orchestrator | Thursday 29 May 2025 00:26:30 +0000 (0:00:01.020) 0:00:22.098 ********** 2025-05-29 00:26:31.238507 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQCiKRnxJEagcG5BRCakqv3dkUXKVLEVXxHM7qS61pLJnJol1vjuEbdrnAPDpBVLC9NqmFjS8w+kXrnLEDBvTRBiPBoOe6sJ6zfeIlcocEehAFix8XwDOmwUdRSygnVWk28T45hwuS8o6UvR8pdwDN0hNJFcH6o07UNcxacGL3J+MBje/1udQzOMOug1qzy4WJqqWAEN9kgnqBN4RTYzYmV5eArPHBKagS8DD+bjcXOJV8KYpWhrMlmP+se3KTbogOcxTEFgqWMRIJcCkWO/vGY/V0OPPES3A/JtzobLYe7MZYc9Gx0sN0YWuB31h88F8UdXQ+s/+qifXxJlg/Y7467jniajeqLCaEFhKCYzZvP1xcjU1ajbaCiHOaMFEduyWT+DVCk7Ab/bgHp+YahdjeuSF5qNXgP9og3l81qPesA2+eUgTZ1G0zP+QME+Di3fLDOyuAobr/JCC80/9+re3tgbfSKlGN8JRQJx1m3o3SKtZP9Nho6i1MqwlYzBDR4jkoE=) 2025-05-29 00:26:31.238754 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICs+Gupl6NV9a5GWmj7/UEQg1NedQfAUhSGkPmhOQjDe) 2025-05-29 00:26:31.240126 | orchestrator | changed: [testbed-manager] => (item=192.168.16.15 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJxN2h9BWyW85AbPDwpxUYYQold3Mb/Hk50Tp4pcSGJEZE3qWFYOdbslsyaLSLntTyZ8Pw97CU0qES7TGDdzujo=) 2025-05-29 00:26:31.240591 | orchestrator | 2025-05-29 00:26:31.241523 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:31.242183 | orchestrator | Thursday 29 May 2025 00:26:31 +0000 (0:00:01.020) 0:00:23.119 
********** 2025-05-29 00:26:32.290523 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBJjXkCW+kV/WUxjch59kqrFdq6xlhmZicOJY1iwsgNNlIyU0qOGxoAy4gAf0I6955op5qZJoIseDZTVo83JnLV4=) 2025-05-29 00:26:32.291284 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDJknGe8Ilph8rDg1sPspVHzKusJxE4JQyUj1B5XbF1JhRNremBtIIAmEmITbeQyuPJvheFTRQdXXPCjxSsoqYNoWQIqvin4+OLNVTrwjNYy5fUuXNghCg47g3o5uJVqdtpAIDJsHoJTQli7tY3WCG36fk7M6o84JWn1ZkJTIVZl5BLMETfiUEdfkPk1e4wmQ8w5YIentC0/3fuiEssMe9r4xurw1++omDRtG8jygJndAI64NLyVvr5VwXUmxtf8jAcaOdN+X7eBefTfmHXEdmG/tcBcdsBIo2k6uUDjTyjCkiXCsCsKtkn9aSAWqRVoQP7adfDTfHrqUt7JfK+pVZyA2gGtmCAWY8VYrNjNbVGuF17yxLf97uWgnP0vyqmASnrRovATvnC9ua7qxqr9SUcbCNf6QhtR0zWXebgaXJyEkgVabWzpxbJf7MSlUl7NL0lCNLTfx+O6eU+B9bOPCOZuixKNZRKaLdu/a4pEjCpBdv3ECtF7pXkoLx5ZkcVR9M=) 2025-05-29 00:26:32.291764 | orchestrator | changed: [testbed-manager] => (item=192.168.16.10 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIIX3CPpvmEWhRt3rXTGuic7z0tyynAaOWsPHR2RvDnHi) 2025-05-29 00:26:32.292231 | orchestrator | 2025-05-29 00:26:32.293274 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:32.294008 | orchestrator | Thursday 29 May 2025 00:26:32 +0000 (0:00:01.050) 0:00:24.170 ********** 2025-05-29 00:26:33.341130 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQCocBG4j60oaour8AZKQWe8gglG61B7Ac8zTJLZe2fqcV6S9bXdNpv5GfFtmOL23r6iyOix+vo5pdsrJdNfzFVj9WGAh25NJ64Oo7wPUJfXz7prymmSG9A/T10VAo7UXplwXqO6hIQmuzGWGxc6rJphQpi7BG6Um9QO9Bpm6Hy8NzgMG7INcDAR1WeKmyFXhgosp5TWzK5jzwA3Re6FDjrNUKFp9neHXY/HxtNwtlaQbz1xOuY58+POlOSAWMhTG2svz13QGBghPzM686nhfHlCGtjOkdp9/rpQADd8gJqGWHH7DpsucWT4zOrEqPF3A2kDbasCL3Pv8i1VxHmV9YvOTtfDVr4dZcvSXbOlETWnz+EOOXOr+7S6o71rv2gbG9ptgxPGHDAns3HBHnfNFcwNGl0mHMQvAwHbN5nAI+VPdmECA9RhALXOjGEJsi3Jlm3wojW7YfCx4D1sBHRJY3T8UpZvytr5hHZfDu3qFjqrYSe5k8f7GyI58XJ+QtXQij8=) 2025-05-29 00:26:33.341793 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBGyfYN7qcRTmrfochRCNyUsOiF2RHVsOSvWpnmjl4oR/A3UN9MbJ91zWcemtGXCxn58PQs5Zd6PdV1mA02aMcIY=) 2025-05-29 00:26:33.342414 | orchestrator | changed: [testbed-manager] => (item=192.168.16.11 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIACis/nDfSDTFryFj3bNiuKGBvyZF8lVz9ekHEmILUf1) 2025-05-29 00:26:33.342943 | orchestrator | 2025-05-29 00:26:33.343683 | orchestrator | TASK [osism.commons.known_hosts : Write scanned known_hosts entries] *********** 2025-05-29 00:26:33.344223 | orchestrator | Thursday 29 May 2025 00:26:33 +0000 (0:00:01.051) 0:00:25.221 ********** 2025-05-29 00:26:34.433906 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBDnsrpm8J6C9qMAmpaiK/W8MJOEl0BHymboTff8gIS0dBiGp4Y+tGiFIwh4gonR01qpW8jcVnYmBeFMA/OXSLJE=) 2025-05-29 00:26:34.435219 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-rsa 
AAAAB3NzaC1yc2EAAAADAQABAAABgQDA6wgg/1vLxk/VieGchSwt+H+CCYJKrZS7pZbO9n6dRp1qaYAFwvKnYlPuiTKuR12GEydex5etPlWOOHVG/lS7jMKZkP/Cvx1F6spZhEY1gi7PJoxn7duyQd1xe1WEudE9Kvr6jnTxL2RJtO5zh4lzd3G4EWzsfnq45unmsQdV/GAfunpNdYLd0m+D+HXAIw7+ZX2YYoRQ/3UiXxqiMbyylxvtzf0UguFZvlvEmP9RsStlU8mBj/3hNS3BGOQ72dpO66gbht+rJynGKC9PWjmfWgvwmuLE/8gjZh6M3fZy5OMpGF5qba8YQhad1F0YOvB6OlKlN3XVXzyfgdgZQ9f/OcUW+L5Bpj5TlDs1VMfV13o8BPCB/eM0aYx+BiciNLMBFEwgIw0mCMGGNU8bFjieGdhR6CFQEInuJbhn8MqEIHbBLjIqWtf6dw6trMgnK5JuWiipXWLudIQM3qgtbcak+M2Ts9Kf2JH3TJCe4OHBdJXz9gIX9xljdwKnFi2xJQU=) 2025-05-29 00:26:34.436196 | orchestrator | changed: [testbed-manager] => (item=192.168.16.12 ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGGUAIC9JORRrVYkc+sDZXs3XcWJpIxKJbd6Oo14Ym9n) 2025-05-29 00:26:34.436442 | orchestrator | 2025-05-29 00:26:34.437113 | orchestrator | TASK [osism.commons.known_hosts : Write static known_hosts entries] ************ 2025-05-29 00:26:34.437751 | orchestrator | Thursday 29 May 2025 00:26:34 +0000 (0:00:01.091) 0:00:26.313 ********** 2025-05-29 00:26:34.598909 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)  2025-05-29 00:26:34.599037 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)  2025-05-29 00:26:34.599955 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)  2025-05-29 00:26:34.600706 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)  2025-05-29 00:26:34.601393 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)  2025-05-29 00:26:34.602381 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)  2025-05-29 00:26:34.603031 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)  2025-05-29 00:26:34.603456 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:26:34.603973 | orchestrator | 2025-05-29 00:26:34.604443 | orchestrator | TASK [osism.commons.known_hosts : Write extra known_hosts entries] ************* 2025-05-29 00:26:34.605175 | orchestrator | Thursday 29 May 
2025 00:26:34 +0000 (0:00:00.167) 0:00:26.480 ********** 2025-05-29 00:26:34.660785 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:26:34.660964 | orchestrator | 2025-05-29 00:26:34.660981 | orchestrator | TASK [osism.commons.known_hosts : Delete known_hosts entries] ****************** 2025-05-29 00:26:34.661481 | orchestrator | Thursday 29 May 2025 00:26:34 +0000 (0:00:00.060) 0:00:26.541 ********** 2025-05-29 00:26:34.726283 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:26:34.728050 | orchestrator | 2025-05-29 00:26:34.728489 | orchestrator | TASK [osism.commons.known_hosts : Set file permissions] ************************ 2025-05-29 00:26:34.729468 | orchestrator | Thursday 29 May 2025 00:26:34 +0000 (0:00:00.066) 0:00:26.607 ********** 2025-05-29 00:26:35.332549 | orchestrator | changed: [testbed-manager] 2025-05-29 00:26:35.333430 | orchestrator | 2025-05-29 00:26:35.333480 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:26:35.333757 | orchestrator | 2025-05-29 00:26:35 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-29 00:26:35.333781 | orchestrator | 2025-05-29 00:26:35 | INFO  | Please wait and do not abort execution. 
2025-05-29 00:26:35.334293 | orchestrator | testbed-manager : ok=31  changed=15  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-29 00:26:35.335182 | orchestrator | 2025-05-29 00:26:35.335407 | orchestrator | Thursday 29 May 2025 00:26:35 +0000 (0:00:00.605) 0:00:27.213 ********** 2025-05-29 00:26:35.335473 | orchestrator | =============================================================================== 2025-05-29 00:26:35.335928 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with hostname --- 6.05s 2025-05-29 00:26:35.336414 | orchestrator | osism.commons.known_hosts : Run ssh-keyscan for all hosts with ansible_host --- 5.23s 2025-05-29 00:26:35.337164 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.15s 2025-05-29 00:26:35.338122 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.09s 2025-05-29 00:26:35.338536 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.06s 2025-05-29 00:26:35.339147 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-29 00:26:35.339965 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.05s 2025-05-29 00:26:35.340438 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-05-29 00:26:35.340852 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.04s 2025-05-29 00:26:35.341068 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-29 00:26:35.341476 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.03s 2025-05-29 00:26:35.341762 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-05-29 00:26:35.342136 | orchestrator | osism.commons.known_hosts : Write 
scanned known_hosts entries ----------- 1.02s 2025-05-29 00:26:35.342402 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-05-29 00:26:35.342773 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 1.02s 2025-05-29 00:26:35.342965 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries ----------- 0.97s 2025-05-29 00:26:35.343348 | orchestrator | osism.commons.known_hosts : Set file permissions ------------------------ 0.61s 2025-05-29 00:26:35.343759 | orchestrator | osism.commons.known_hosts : Write static known_hosts entries ------------ 0.17s 2025-05-29 00:26:35.344008 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with ansible_host --- 0.17s 2025-05-29 00:26:35.344127 | orchestrator | osism.commons.known_hosts : Write scanned known_hosts entries for all hosts with hostname --- 0.16s 2025-05-29 00:26:35.749618 | orchestrator | + osism apply squid 2025-05-29 00:26:37.157530 | orchestrator | 2025-05-29 00:26:37 | INFO  | Task ca6f69a6-1ab6-4718-9295-e04df7f6cab2 (squid) was prepared for execution. 2025-05-29 00:26:37.157617 | orchestrator | 2025-05-29 00:26:37 | INFO  | It takes a moment until task ca6f69a6-1ab6-4718-9295-e04df7f6cab2 (squid) has been started and output is visible here. 
2025-05-29 00:26:40.089834 | orchestrator | 2025-05-29 00:26:40.090537 | orchestrator | PLAY [Apply role squid] ******************************************************** 2025-05-29 00:26:40.091717 | orchestrator | 2025-05-29 00:26:40.092028 | orchestrator | TASK [osism.services.squid : Include install tasks] **************************** 2025-05-29 00:26:40.092848 | orchestrator | Thursday 29 May 2025 00:26:40 +0000 (0:00:00.106) 0:00:00.106 ********** 2025-05-29 00:26:40.188480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/squid/tasks/install-Debian-family.yml for testbed-manager 2025-05-29 00:26:40.188993 | orchestrator | 2025-05-29 00:26:40.189614 | orchestrator | TASK [osism.services.squid : Install required packages] ************************ 2025-05-29 00:26:40.190248 | orchestrator | Thursday 29 May 2025 00:26:40 +0000 (0:00:00.102) 0:00:00.209 ********** 2025-05-29 00:26:41.548017 | orchestrator | ok: [testbed-manager] 2025-05-29 00:26:41.548111 | orchestrator | 2025-05-29 00:26:41.548913 | orchestrator | TASK [osism.services.squid : Create required directories] ********************** 2025-05-29 00:26:41.550108 | orchestrator | Thursday 29 May 2025 00:26:41 +0000 (0:00:01.356) 0:00:01.566 ********** 2025-05-29 00:26:42.686119 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration) 2025-05-29 00:26:42.687451 | orchestrator | changed: [testbed-manager] => (item=/opt/squid/configuration/conf.d) 2025-05-29 00:26:42.688335 | orchestrator | ok: [testbed-manager] => (item=/opt/squid) 2025-05-29 00:26:42.689038 | orchestrator | 2025-05-29 00:26:42.689871 | orchestrator | TASK [osism.services.squid : Copy squid configuration files] ******************* 2025-05-29 00:26:42.690704 | orchestrator | Thursday 29 May 2025 00:26:42 +0000 (0:00:01.139) 0:00:02.705 ********** 2025-05-29 00:26:43.806730 | orchestrator | changed: [testbed-manager] => (item=osism.conf) 2025-05-29 00:26:43.806806 | 
orchestrator | 2025-05-29 00:26:43.807428 | orchestrator | TASK [osism.services.squid : Remove osism_allow_list.conf configuration file] *** 2025-05-29 00:26:43.807843 | orchestrator | Thursday 29 May 2025 00:26:43 +0000 (0:00:01.119) 0:00:03.824 ********** 2025-05-29 00:26:44.177809 | orchestrator | ok: [testbed-manager] 2025-05-29 00:26:44.178111 | orchestrator | 2025-05-29 00:26:44.178916 | orchestrator | TASK [osism.services.squid : Copy docker-compose.yml file] ********************* 2025-05-29 00:26:44.178975 | orchestrator | Thursday 29 May 2025 00:26:44 +0000 (0:00:00.372) 0:00:04.197 ********** 2025-05-29 00:26:45.124671 | orchestrator | changed: [testbed-manager] 2025-05-29 00:26:45.124789 | orchestrator | 2025-05-29 00:26:45.125767 | orchestrator | TASK [osism.services.squid : Manage squid service] ***************************** 2025-05-29 00:26:45.126347 | orchestrator | Thursday 29 May 2025 00:26:45 +0000 (0:00:00.947) 0:00:05.145 ********** 2025-05-29 00:27:16.474272 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage squid service (10 retries left). 
2025-05-29 00:27:16.474397 | orchestrator | ok: [testbed-manager] 2025-05-29 00:27:16.474413 | orchestrator | 2025-05-29 00:27:16.474426 | orchestrator | RUNNING HANDLER [osism.services.squid : Restart squid service] ***************** 2025-05-29 00:27:16.474439 | orchestrator | Thursday 29 May 2025 00:27:16 +0000 (0:00:31.345) 0:00:36.490 ********** 2025-05-29 00:27:28.871167 | orchestrator | changed: [testbed-manager] 2025-05-29 00:27:28.871291 | orchestrator | 2025-05-29 00:27:28.871308 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for squid service to start] ******* 2025-05-29 00:27:28.871322 | orchestrator | Thursday 29 May 2025 00:27:28 +0000 (0:00:12.394) 0:00:48.884 ********** 2025-05-29 00:28:28.949823 | orchestrator | Pausing for 60 seconds 2025-05-29 00:28:28.949948 | orchestrator | changed: [testbed-manager] 2025-05-29 00:28:28.950130 | orchestrator | 2025-05-29 00:28:28.950284 | orchestrator | RUNNING HANDLER [osism.services.squid : Register that squid service was restarted] *** 2025-05-29 00:28:28.950816 | orchestrator | Thursday 29 May 2025 00:28:28 +0000 (0:01:00.080) 0:01:48.965 ********** 2025-05-29 00:28:29.007738 | orchestrator | ok: [testbed-manager] 2025-05-29 00:28:29.007858 | orchestrator | 2025-05-29 00:28:29.007980 | orchestrator | RUNNING HANDLER [osism.services.squid : Wait for an healthy squid service] ***** 2025-05-29 00:28:29.009048 | orchestrator | Thursday 29 May 2025 00:28:28 +0000 (0:00:00.062) 0:01:49.027 ********** 2025-05-29 00:28:29.629054 | orchestrator | changed: [testbed-manager] 2025-05-29 00:28:29.630242 | orchestrator | 2025-05-29 00:28:29.630935 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:28:29.632267 | orchestrator | 2025-05-29 00:28:29 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-29 00:28:29.632301 | orchestrator | 2025-05-29 00:28:29 | INFO  | Please wait and do not abort execution. 2025-05-29 00:28:29.632365 | orchestrator | testbed-manager : ok=11  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:28:29.633014 | orchestrator | 2025-05-29 00:28:29.633226 | orchestrator | Thursday 29 May 2025 00:28:29 +0000 (0:00:00.621) 0:01:49.648 ********** 2025-05-29 00:28:29.635115 | orchestrator | =============================================================================== 2025-05-29 00:28:29.635142 | orchestrator | osism.services.squid : Wait for squid service to start ----------------- 60.08s 2025-05-29 00:28:29.635153 | orchestrator | osism.services.squid : Manage squid service ---------------------------- 31.35s 2025-05-29 00:28:29.636621 | orchestrator | osism.services.squid : Restart squid service --------------------------- 12.39s 2025-05-29 00:28:29.637349 | orchestrator | osism.services.squid : Install required packages ------------------------ 1.36s 2025-05-29 00:28:29.638204 | orchestrator | osism.services.squid : Create required directories ---------------------- 1.14s 2025-05-29 00:28:29.638879 | orchestrator | osism.services.squid : Copy squid configuration files ------------------- 1.12s 2025-05-29 00:28:29.639346 | orchestrator | osism.services.squid : Copy docker-compose.yml file --------------------- 0.95s 2025-05-29 00:28:29.640088 | orchestrator | osism.services.squid : Wait for an healthy squid service ---------------- 0.62s 2025-05-29 00:28:29.640923 | orchestrator | osism.services.squid : Remove osism_allow_list.conf configuration file --- 0.37s 2025-05-29 00:28:29.641596 | orchestrator | osism.services.squid : Include install tasks ---------------------------- 0.10s 2025-05-29 00:28:29.642156 | orchestrator | osism.services.squid : Register that squid service was restarted -------- 0.06s 2025-05-29 00:28:30.033434 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-29 00:28:30.033531 | 
orchestrator | + sed -i 's#docker_namespace: kolla#docker_namespace: kolla/release#' /opt/configuration/inventory/group_vars/all/kolla.yml 2025-05-29 00:28:30.037030 | orchestrator | ++ semver 8.1.0 9.0.0 2025-05-29 00:28:30.092009 | orchestrator | + [[ -1 -lt 0 ]] 2025-05-29 00:28:30.092067 | orchestrator | + [[ 8.1.0 != \l\a\t\e\s\t ]] 2025-05-29 00:28:30.092081 | orchestrator | + sed -i 's|^# \(network_dispatcher_scripts:\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml 2025-05-29 00:28:30.097295 | orchestrator | + sed -i 's|^# \( - src: /opt/configuration/network/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-29 00:28:30.101057 | orchestrator | + sed -i 's|^# \( dest: routable.d/vxlan.sh\)$|\1|g' /opt/configuration/inventory/group_vars/testbed-nodes.yml /opt/configuration/inventory/group_vars/testbed-managers.yml 2025-05-29 00:28:30.105086 | orchestrator | + osism apply operator -u ubuntu -l testbed-nodes 2025-05-29 00:28:31.520919 | orchestrator | 2025-05-29 00:28:31 | INFO  | Task 45ccffc0-5b9e-4929-bc3a-61223d440ad5 (operator) was prepared for execution. 2025-05-29 00:28:31.521027 | orchestrator | 2025-05-29 00:28:31 | INFO  | It takes a moment until task 45ccffc0-5b9e-4929-bc3a-61223d440ad5 (operator) has been started and output is visible here. 
2025-05-29 00:28:34.512594 | orchestrator | 2025-05-29 00:28:34.512766 | orchestrator | PLAY [Make ssh pipelining working] ********************************************* 2025-05-29 00:28:34.512786 | orchestrator | 2025-05-29 00:28:34.512798 | orchestrator | TASK [Gathering Facts] ********************************************************* 2025-05-29 00:28:34.512810 | orchestrator | Thursday 29 May 2025 00:28:34 +0000 (0:00:00.087) 0:00:00.087 ********** 2025-05-29 00:28:37.926231 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:28:37.931207 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:28:37.931266 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:28:37.931280 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:28:37.931292 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:28:37.931303 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:28:37.931314 | orchestrator | 2025-05-29 00:28:37.931353 | orchestrator | TASK [Do not require tty for all users] **************************************** 2025-05-29 00:28:37.931367 | orchestrator | Thursday 29 May 2025 00:28:37 +0000 (0:00:03.416) 0:00:03.503 ********** 2025-05-29 00:28:38.726451 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:28:38.728593 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:28:38.729073 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:28:38.730558 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:28:38.731448 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:28:38.732119 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:28:38.733929 | orchestrator | 2025-05-29 00:28:38.735441 | orchestrator | PLAY [Apply role operator] ***************************************************** 2025-05-29 00:28:38.740189 | orchestrator | 2025-05-29 00:28:38.744707 | orchestrator | TASK [osism.commons.operator : Gather variables for each operating system] ***** 2025-05-29 00:28:38.744756 | orchestrator | Thursday 29 May 2025 00:28:38 +0000 (0:00:00.800) 0:00:04.304 ********** 2025-05-29 
00:28:38.798853 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:28:38.825156 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:28:38.848517 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:28:38.882205 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:28:38.882726 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:28:38.883565 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:28:38.884368 | orchestrator | 2025-05-29 00:28:38.886527 | orchestrator | TASK [osism.commons.operator : Set operator_groups variable to default value] *** 2025-05-29 00:28:38.886553 | orchestrator | Thursday 29 May 2025 00:28:38 +0000 (0:00:00.156) 0:00:04.460 ********** 2025-05-29 00:28:38.935011 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:28:38.972822 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:28:38.992598 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:28:39.039883 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:28:39.039974 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:28:39.040068 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:28:39.043585 | orchestrator | 2025-05-29 00:28:39.043664 | orchestrator | TASK [osism.commons.operator : Create operator group] ************************** 2025-05-29 00:28:39.043691 | orchestrator | Thursday 29 May 2025 00:28:39 +0000 (0:00:00.156) 0:00:04.617 ********** 2025-05-29 00:28:39.753889 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:28:39.754168 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:28:39.755867 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:28:39.756304 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:28:39.759930 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:28:39.760601 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:28:39.761415 | orchestrator | 2025-05-29 00:28:39.762337 | orchestrator | TASK [osism.commons.operator : Create user] ************************************ 2025-05-29 00:28:39.763374 | orchestrator | Thursday 29 May 2025 
00:28:39 +0000 (0:00:00.712) 0:00:05.330 ********** 2025-05-29 00:28:40.572124 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:28:40.572231 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:28:40.572311 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:28:40.573046 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:28:40.573392 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:28:40.573843 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:28:40.576273 | orchestrator | 2025-05-29 00:28:40.576765 | orchestrator | TASK [osism.commons.operator : Add user to additional groups] ****************** 2025-05-29 00:28:40.577106 | orchestrator | Thursday 29 May 2025 00:28:40 +0000 (0:00:00.817) 0:00:06.147 ********** 2025-05-29 00:28:41.790231 | orchestrator | changed: [testbed-node-0] => (item=adm) 2025-05-29 00:28:41.790326 | orchestrator | changed: [testbed-node-1] => (item=adm) 2025-05-29 00:28:41.790338 | orchestrator | changed: [testbed-node-3] => (item=adm) 2025-05-29 00:28:41.790347 | orchestrator | changed: [testbed-node-4] => (item=adm) 2025-05-29 00:28:41.790356 | orchestrator | changed: [testbed-node-2] => (item=adm) 2025-05-29 00:28:41.790364 | orchestrator | changed: [testbed-node-5] => (item=adm) 2025-05-29 00:28:41.790372 | orchestrator | changed: [testbed-node-0] => (item=sudo) 2025-05-29 00:28:41.790380 | orchestrator | changed: [testbed-node-1] => (item=sudo) 2025-05-29 00:28:41.790441 | orchestrator | changed: [testbed-node-3] => (item=sudo) 2025-05-29 00:28:41.790524 | orchestrator | changed: [testbed-node-4] => (item=sudo) 2025-05-29 00:28:41.790536 | orchestrator | changed: [testbed-node-2] => (item=sudo) 2025-05-29 00:28:41.790544 | orchestrator | changed: [testbed-node-5] => (item=sudo) 2025-05-29 00:28:41.790745 | orchestrator | 2025-05-29 00:28:41.790966 | orchestrator | TASK [osism.commons.operator : Copy user sudoers file] ************************* 2025-05-29 00:28:41.792018 | orchestrator | Thursday 29 
May 2025 00:28:41 +0000 (0:00:01.212) 0:00:07.360 **********
2025-05-29 00:28:43.042878 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:28:43.046803 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:28:43.049548 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:28:43.049583 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:28:43.049595 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:28:43.049608 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:28:43.049881 | orchestrator |
2025-05-29 00:28:43.050808 | orchestrator | TASK [osism.commons.operator : Set language variables in .bashrc configuration file] ***
2025-05-29 00:28:43.051360 | orchestrator | Thursday 29 May 2025 00:28:43 +0000 (0:00:01.259) 0:00:08.619 **********
2025-05-29 00:28:44.275882 | orchestrator | [WARNING]: Module remote_tmp /root/.ansible/tmp did not exist and was created
2025-05-29 00:28:44.277341 | orchestrator | with a mode of 0700, this may cause issues when running as another user. To
2025-05-29 00:28:44.278891 | orchestrator | avoid this, create the remote_tmp dir with the correct permissions manually
2025-05-29 00:28:44.316663 | orchestrator | changed: [testbed-node-1] => (item=export LANGUAGE=C.UTF-8)
2025-05-29 00:28:44.317471 | orchestrator | changed: [testbed-node-3] => (item=export LANGUAGE=C.UTF-8)
2025-05-29 00:28:44.317806 | orchestrator | changed: [testbed-node-0] => (item=export LANGUAGE=C.UTF-8)
2025-05-29 00:28:44.318700 | orchestrator | changed: [testbed-node-2] => (item=export LANGUAGE=C.UTF-8)
2025-05-29 00:28:44.318886 | orchestrator | changed: [testbed-node-4] => (item=export LANGUAGE=C.UTF-8)
2025-05-29 00:28:44.319624 | orchestrator | changed: [testbed-node-5] => (item=export LANGUAGE=C.UTF-8)
2025-05-29 00:28:44.321296 | orchestrator | changed: [testbed-node-0] => (item=export LANG=C.UTF-8)
2025-05-29 00:28:44.321745 | orchestrator | changed: [testbed-node-3] => (item=export LANG=C.UTF-8)
2025-05-29 00:28:44.322951 | orchestrator | changed: [testbed-node-4] => (item=export LANG=C.UTF-8)
2025-05-29 00:28:44.328897 | orchestrator | changed: [testbed-node-1] => (item=export LANG=C.UTF-8)
2025-05-29 00:28:44.329443 | orchestrator | changed: [testbed-node-2] => (item=export LANG=C.UTF-8)
2025-05-29 00:28:44.330166 | orchestrator | changed: [testbed-node-5] => (item=export LANG=C.UTF-8)
2025-05-29 00:28:44.333842 | orchestrator | changed: [testbed-node-4] => (item=export LC_ALL=C.UTF-8)
2025-05-29 00:28:44.333873 | orchestrator | changed: [testbed-node-0] => (item=export LC_ALL=C.UTF-8)
2025-05-29 00:28:44.334662 | orchestrator | changed: [testbed-node-3] => (item=export LC_ALL=C.UTF-8)
2025-05-29 00:28:44.335347 | orchestrator | changed: [testbed-node-1] => (item=export LC_ALL=C.UTF-8)
2025-05-29 00:28:44.336010 | orchestrator | changed: [testbed-node-2] => (item=export LC_ALL=C.UTF-8)
2025-05-29 00:28:44.337852 | orchestrator | changed: [testbed-node-5] => (item=export LC_ALL=C.UTF-8)
2025-05-29 00:28:44.339468 | orchestrator |
2025-05-29 00:28:44.340458 | orchestrator | TASK [osism.commons.operator : Create .ssh directory] **************************
2025-05-29 00:28:44.340953 | orchestrator | Thursday 29 May 2025 00:28:44 +0000 (0:00:01.275) 0:00:09.895 **********
2025-05-29 00:28:44.917833 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:28:44.923196 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:28:44.924383 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:28:44.927971 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:28:44.928366 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:28:44.928761 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:28:44.929156 | orchestrator |
2025-05-29 00:28:44.929595 | orchestrator | TASK [osism.commons.operator : Check number of SSH authorized keys] ************
2025-05-29 00:28:44.930087 | orchestrator | Thursday 29 May 2025 00:28:44 +0000 (0:00:00.599) 0:00:10.494 **********
2025-05-29 00:28:44.981568 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:28:45.004867 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:28:45.023464 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:28:45.080819 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:28:45.081907 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:28:45.083441 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:28:45.085502 | orchestrator |
2025-05-29 00:28:45.085952 | orchestrator | TASK [osism.commons.operator : Set ssh authorized keys] ************************
2025-05-29 00:28:45.086394 | orchestrator | Thursday 29 May 2025 00:28:45 +0000 (0:00:00.164) 0:00:10.658 **********
2025-05-29 00:28:45.917722 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-29 00:28:45.917985 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:28:45.921833 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-29 00:28:45.923418 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:28:45.923444 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-29 00:28:45.923457 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:28:45.923469 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-29 00:28:45.923481 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:28:45.923938 | orchestrator | changed: [testbed-node-1] => (item=None)
2025-05-29 00:28:45.927843 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:28:45.927921 | orchestrator | changed: [testbed-node-2] => (item=None)
2025-05-29 00:28:45.927937 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:28:45.927949 | orchestrator |
2025-05-29 00:28:45.927961 | orchestrator | TASK [osism.commons.operator : Delete ssh authorized keys] *********************
2025-05-29 00:28:45.927973 | orchestrator | Thursday 29 May 2025 00:28:45 +0000 (0:00:00.833) 0:00:11.492 **********
2025-05-29 00:28:45.973555 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:28:45.993106 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:28:46.024565 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:28:46.062764 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:28:46.102858 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:28:46.103006 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:28:46.103254 | orchestrator |
2025-05-29 00:28:46.103556 | orchestrator | TASK [osism.commons.operator : Set authorized GitHub accounts] *****************
2025-05-29 00:28:46.105481 | orchestrator | Thursday 29 May 2025 00:28:46 +0000 (0:00:00.183) 0:00:11.676 **********
2025-05-29 00:28:46.145703 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:28:46.180899 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:28:46.197242 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:28:46.216575 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:28:46.253557 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:28:46.253846 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:28:46.254424 | orchestrator |
2025-05-29 00:28:46.254854 | orchestrator | TASK [osism.commons.operator : Delete authorized GitHub accounts] **************
2025-05-29 00:28:46.255390 | orchestrator | Thursday 29 May 2025 00:28:46 +0000 (0:00:00.154) 0:00:11.830 **********
2025-05-29 00:28:46.316074 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:28:46.374179 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:28:46.395053 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:28:46.426310 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:28:46.429760 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:28:46.429799 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:28:46.429812 | orchestrator |
2025-05-29 00:28:46.430466 | orchestrator | TASK [osism.commons.operator : Set password] ***********************************
2025-05-29 00:28:46.431576 | orchestrator | Thursday 29 May 2025 00:28:46 +0000 (0:00:00.172) 0:00:12.003 **********
2025-05-29 00:28:47.151007 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:28:47.151172 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:28:47.152347 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:28:47.153595 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:28:47.154614 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:28:47.155424 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:28:47.156132 | orchestrator |
2025-05-29 00:28:47.156698 | orchestrator | TASK [osism.commons.operator : Unset & lock password] **************************
2025-05-29 00:28:47.157731 | orchestrator | Thursday 29 May 2025 00:28:47 +0000 (0:00:00.723) 0:00:12.727 **********
2025-05-29 00:28:47.243391 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:28:47.278555 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:28:47.376465 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:28:47.376703 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:28:47.377439 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:28:47.378209 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:28:47.378523 | orchestrator |
2025-05-29 00:28:47.381878 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:28:47.381973 | orchestrator | 2025-05-29 00:28:47 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-29 00:28:47.381995 | orchestrator | 2025-05-29 00:28:47 | INFO  | Please wait and do not abort execution.
2025-05-29 00:28:47.382552 | orchestrator | testbed-node-0 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-29 00:28:47.383333 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-29 00:28:47.384391 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-29 00:28:47.385548 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-29 00:28:47.386291 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-29 00:28:47.387163 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0
2025-05-29 00:28:47.387618 | orchestrator |
2025-05-29 00:28:47.388552 | orchestrator | Thursday 29 May 2025 00:28:47 +0000 (0:00:00.227) 0:00:12.954 **********
2025-05-29 00:28:47.389066 | orchestrator | ===============================================================================
2025-05-29 00:28:47.390377 | orchestrator | Gathering Facts --------------------------------------------------------- 3.42s
2025-05-29 00:28:47.390857 | orchestrator | osism.commons.operator : Set language variables in .bashrc configuration file --- 1.28s
2025-05-29 00:28:47.393204 | orchestrator | osism.commons.operator : Copy user sudoers file ------------------------- 1.26s
2025-05-29 00:28:47.393826 | orchestrator | osism.commons.operator : Add user to additional groups ------------------ 1.21s
2025-05-29 00:28:47.394850 | orchestrator | osism.commons.operator : Set ssh authorized keys ------------------------ 0.83s
2025-05-29 00:28:47.395283 | orchestrator | osism.commons.operator : Create user ------------------------------------ 0.82s
2025-05-29 00:28:47.395964 | orchestrator | Do not require tty for all users ---------------------------------------- 0.80s
2025-05-29 00:28:47.396255 | orchestrator | osism.commons.operator : Set password ----------------------------------- 0.72s
2025-05-29 00:28:47.397883 | orchestrator | osism.commons.operator : Create operator group -------------------------- 0.71s
2025-05-29 00:28:47.398220 | orchestrator | osism.commons.operator : Create .ssh directory -------------------------- 0.60s
2025-05-29 00:28:47.398732 | orchestrator | osism.commons.operator : Unset & lock password -------------------------- 0.23s
2025-05-29 00:28:47.399391 | orchestrator | osism.commons.operator : Delete ssh authorized keys --------------------- 0.18s
2025-05-29 00:28:47.399628 | orchestrator | osism.commons.operator : Delete authorized GitHub accounts -------------- 0.17s
2025-05-29 00:28:47.400120 | orchestrator | osism.commons.operator : Check number of SSH authorized keys ------------ 0.16s
2025-05-29 00:28:47.401088 | orchestrator | osism.commons.operator : Set operator_groups variable to default value --- 0.16s
2025-05-29 00:28:47.401472 | orchestrator | osism.commons.operator : Gather variables for each operating system ----- 0.16s
2025-05-29 00:28:47.402503 | orchestrator | osism.commons.operator : Set authorized GitHub accounts ----------------- 0.15s
2025-05-29 00:28:47.782345 | orchestrator | + osism apply --environment custom facts
2025-05-29 00:28:49.171338 | orchestrator | 2025-05-29 00:28:49 | INFO  | Trying to run play facts in environment custom
2025-05-29 00:28:49.221197 | orchestrator | 2025-05-29 00:28:49 | INFO  | Task 48bec760-36f3-4789-af24-8faa27206aef (facts) was prepared for execution.
2025-05-29 00:28:49.221297 | orchestrator | 2025-05-29 00:28:49 | INFO  | It takes a moment until task 48bec760-36f3-4789-af24-8faa27206aef (facts) has been started and output is visible here.
2025-05-29 00:28:52.186340 | orchestrator |
2025-05-29 00:28:52.186974 | orchestrator | PLAY [Copy custom network devices fact] ****************************************
2025-05-29 00:28:52.189068 | orchestrator |
2025-05-29 00:28:52.192760 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-29 00:28:52.196510 | orchestrator | Thursday 29 May 2025 00:28:52 +0000 (0:00:00.081) 0:00:00.081 **********
2025-05-29 00:28:53.442484 | orchestrator | ok: [testbed-manager]
2025-05-29 00:28:54.521440 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:28:54.521532 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:28:54.522764 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:28:54.522806 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:28:54.526401 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:28:54.526692 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:28:54.527190 | orchestrator |
2025-05-29 00:28:54.527578 | orchestrator | TASK [Copy fact file] **********************************************************
2025-05-29 00:28:54.527884 | orchestrator | Thursday 29 May 2025 00:28:54 +0000 (0:00:02.335) 0:00:02.417 **********
2025-05-29 00:28:55.638601 | orchestrator | ok: [testbed-manager]
2025-05-29 00:28:56.490736 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:28:56.490820 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:28:56.491861 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:28:56.493265 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:28:56.496409 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:28:56.496774 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:28:56.497390 | orchestrator |
2025-05-29 00:28:56.498627 | orchestrator | PLAY [Copy custom ceph devices facts] ******************************************
2025-05-29 00:28:56.498727 | orchestrator |
2025-05-29 00:28:56.499047 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] ***
2025-05-29 00:28:56.499322 | orchestrator | Thursday 29 May 2025 00:28:56 +0000 (0:00:01.970) 0:00:04.387 **********
2025-05-29 00:28:56.615197 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:28:56.615389 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:28:56.616621 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:28:56.617070 | orchestrator |
2025-05-29 00:28:56.618808 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] ***
2025-05-29 00:28:56.618919 | orchestrator | Thursday 29 May 2025 00:28:56 +0000 (0:00:00.125) 0:00:04.513 **********
2025-05-29 00:28:56.735088 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:28:56.735177 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:28:56.735185 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:28:56.735251 | orchestrator |
2025-05-29 00:28:56.735302 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ******************
2025-05-29 00:28:56.735558 | orchestrator | Thursday 29 May 2025 00:28:56 +0000 (0:00:00.119) 0:00:04.632 **********
2025-05-29 00:28:56.863122 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:28:56.863219 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:28:56.863310 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:28:56.863478 | orchestrator |
2025-05-29 00:28:56.863688 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] ***
2025-05-29 00:28:56.863806 | orchestrator | Thursday 29 May 2025 00:28:56 +0000 (0:00:00.128) 0:00:04.761 **********
2025-05-29 00:28:56.991572 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:28:56.991920 | orchestrator |
2025-05-29 00:28:56.992533 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] *****
2025-05-29 00:28:56.992752 | orchestrator | Thursday 29 May 2025 00:28:56 +0000 (0:00:00.127) 0:00:04.889 **********
2025-05-29 00:28:57.431167 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:28:57.434735 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:28:57.434820 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:28:57.434833 | orchestrator |
2025-05-29 00:28:57.434845 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] *************
2025-05-29 00:28:57.434857 | orchestrator | Thursday 29 May 2025 00:28:57 +0000 (0:00:00.438) 0:00:05.327 **********
2025-05-29 00:28:57.541089 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:28:57.541185 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:28:57.542074 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:28:57.542962 | orchestrator |
2025-05-29 00:28:57.543764 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] ***************
2025-05-29 00:28:57.544908 | orchestrator | Thursday 29 May 2025 00:28:57 +0000 (0:00:00.109) 0:00:05.437 **********
2025-05-29 00:28:58.519206 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:28:58.519326 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:28:58.519341 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:28:58.519353 | orchestrator |
2025-05-29 00:28:58.519366 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] *********************
2025-05-29 00:28:58.519378 | orchestrator | Thursday 29 May 2025 00:28:58 +0000 (0:00:00.976) 0:00:06.413 **********
2025-05-29 00:28:58.994322 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:28:58.994505 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:28:58.995118 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:28:58.995768 | orchestrator |
2025-05-29 00:28:58.996484 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] *********************
2025-05-29 00:28:58.997125 | orchestrator | Thursday 29 May 2025 00:28:58 +0000 (0:00:00.477) 0:00:06.890 **********
2025-05-29 00:29:00.012399 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:29:00.012550 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:29:00.012694 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:29:00.013362 | orchestrator |
2025-05-29 00:29:00.013751 | orchestrator | TASK [osism.commons.repository : Update package cache] *************************
2025-05-29 00:29:00.015974 | orchestrator | Thursday 29 May 2025 00:29:00 +0000 (0:00:01.017) 0:00:07.908 **********
2025-05-29 00:29:13.806501 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:29:13.806613 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:29:13.806801 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:29:13.808393 | orchestrator |
2025-05-29 00:29:13.809169 | orchestrator | TASK [Install required packages (RedHat)] **************************************
2025-05-29 00:29:13.810192 | orchestrator | Thursday 29 May 2025 00:29:13 +0000 (0:00:13.789) 0:00:21.698 **********
2025-05-29 00:29:13.870292 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:29:13.917493 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:29:13.919532 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:29:13.919582 | orchestrator |
2025-05-29 00:29:13.919596 | orchestrator | TASK [Install required packages (Debian)] **************************************
2025-05-29 00:29:13.923130 | orchestrator | Thursday 29 May 2025 00:29:13 +0000 (0:00:00.117) 0:00:21.815 **********
2025-05-29 00:29:21.380817 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:29:21.381220 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:29:21.382102 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:29:21.383010 | orchestrator |
2025-05-29 00:29:21.384863 | orchestrator | TASK [Create custom facts directory] *******************************************
2025-05-29 00:29:21.385921 | orchestrator | Thursday 29 May 2025 00:29:21 +0000 (0:00:07.460) 0:00:29.276 **********
2025-05-29 00:29:21.803539 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:29:21.803828 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:29:21.805522 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:29:21.806739 | orchestrator |
2025-05-29 00:29:21.807101 | orchestrator | TASK [Copy fact files] *********************************************************
2025-05-29 00:29:21.807940 | orchestrator | Thursday 29 May 2025 00:29:21 +0000 (0:00:00.424) 0:00:29.701 **********
2025-05-29 00:29:25.331822 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices)
2025-05-29 00:29:25.332078 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices)
2025-05-29 00:29:25.333592 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices)
2025-05-29 00:29:25.335418 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_devices_all)
2025-05-29 00:29:25.336322 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_devices_all)
2025-05-29 00:29:25.337005 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_devices_all)
2025-05-29 00:29:25.338075 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices)
2025-05-29 00:29:25.338335 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices)
2025-05-29 00:29:25.338964 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices)
2025-05-29 00:29:25.339533 | orchestrator | changed: [testbed-node-4] => (item=testbed_ceph_osd_devices_all)
2025-05-29 00:29:25.339975 | orchestrator | changed: [testbed-node-3] => (item=testbed_ceph_osd_devices_all)
2025-05-29 00:29:25.340478 | orchestrator | changed: [testbed-node-5] => (item=testbed_ceph_osd_devices_all)
2025-05-29 00:29:25.340986 | orchestrator |
2025-05-29 00:29:25.341456 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-29 00:29:25.341948 | orchestrator | Thursday 29 May 2025 00:29:25 +0000 (0:00:03.525) 0:00:33.226 **********
2025-05-29 00:29:26.462297 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:29:26.462456 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:29:26.462473 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:29:26.462551 | orchestrator |
2025-05-29 00:29:26.462991 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-29 00:29:26.465893 | orchestrator |
2025-05-29 00:29:26.465987 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-29 00:29:26.466520 | orchestrator | Thursday 29 May 2025 00:29:26 +0000 (0:00:01.132) 0:00:34.359 **********
2025-05-29 00:29:28.186934 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:29:31.519601 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:29:31.520255 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:29:31.521008 | orchestrator | ok: [testbed-manager]
2025-05-29 00:29:31.522226 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:29:31.522943 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:29:31.523856 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:29:31.524931 | orchestrator |
2025-05-29 00:29:31.526680 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:29:31.526761 | orchestrator | 2025-05-29 00:29:31 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-29 00:29:31.526783 | orchestrator | 2025-05-29 00:29:31 | INFO  | Please wait and do not abort execution.
2025-05-29 00:29:31.527675 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:29:31.528716 | orchestrator | testbed-node-0 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:29:31.528743 | orchestrator | testbed-node-1 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:29:31.528861 | orchestrator | testbed-node-2 : ok=3  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:29:31.529729 | orchestrator | testbed-node-3 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:29:31.531209 | orchestrator | testbed-node-4 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:29:31.531682 | orchestrator | testbed-node-5 : ok=16  changed=7  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:29:31.532039 | orchestrator |
2025-05-29 00:29:31.533226 | orchestrator | Thursday 29 May 2025 00:29:31 +0000 (0:00:05.057) 0:00:39.416 **********
2025-05-29 00:29:31.533800 | orchestrator | ===============================================================================
2025-05-29 00:29:31.534551 | orchestrator | osism.commons.repository : Update package cache ------------------------ 13.79s
2025-05-29 00:29:31.535330 | orchestrator | Install required packages (Debian) -------------------------------------- 7.46s
2025-05-29 00:29:31.535738 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.06s
2025-05-29 00:29:31.536197 | orchestrator | Copy fact files --------------------------------------------------------- 3.53s
2025-05-29 00:29:31.536860 | orchestrator | Create custom facts directory ------------------------------------------- 2.34s
2025-05-29 00:29:31.536881 | orchestrator | Copy fact file ---------------------------------------------------------- 1.97s
2025-05-29 00:29:31.537172 | orchestrator | osism.commons.repository : Force update of package cache ---------------- 1.13s
2025-05-29 00:29:31.537681 | orchestrator | osism.commons.repository : Copy ubuntu.sources file --------------------- 1.02s
2025-05-29 00:29:31.538123 | orchestrator | osism.commons.repository : Copy 99osism apt configuration --------------- 0.98s
2025-05-29 00:29:31.538700 | orchestrator | osism.commons.repository : Remove sources.list file --------------------- 0.48s
2025-05-29 00:29:31.538956 | orchestrator | osism.commons.repository : Create /etc/apt/sources.list.d directory ----- 0.44s
2025-05-29 00:29:31.539373 | orchestrator | Create custom facts directory ------------------------------------------- 0.42s
2025-05-29 00:29:31.539657 | orchestrator | osism.commons.repository : Set repositories to default ------------------ 0.13s
2025-05-29 00:29:31.539992 | orchestrator | osism.commons.repository : Include distribution specific repository tasks --- 0.13s
2025-05-29 00:29:31.540391 | orchestrator | osism.commons.repository : Gather variables for each operating system --- 0.13s
2025-05-29 00:29:31.540704 | orchestrator | osism.commons.repository : Set repository_default fact to default value --- 0.12s
2025-05-29 00:29:31.541522 | orchestrator | Install required packages (RedHat) -------------------------------------- 0.12s
2025-05-29 00:29:31.542203 | orchestrator | osism.commons.repository : Include tasks for Ubuntu < 24.04 ------------- 0.11s
2025-05-29 00:29:31.958120 | orchestrator | + osism apply bootstrap
2025-05-29 00:29:33.368187 | orchestrator | 2025-05-29 00:29:33 | INFO  | Task 639a4169-dc1b-4ce8-8a0e-6095c188fe75 (bootstrap) was prepared for execution.
2025-05-29 00:29:33.368294 | orchestrator | 2025-05-29 00:29:33 | INFO  | It takes a moment until task 639a4169-dc1b-4ce8-8a0e-6095c188fe75 (bootstrap) has been started and output is visible here.
2025-05-29 00:29:36.467606 | orchestrator |
2025-05-29 00:29:36.467761 | orchestrator | PLAY [Group hosts based on state bootstrap] ************************************
2025-05-29 00:29:36.468241 | orchestrator |
2025-05-29 00:29:36.468971 | orchestrator | TASK [Group hosts based on state bootstrap] ************************************
2025-05-29 00:29:36.469818 | orchestrator | Thursday 29 May 2025 00:29:36 +0000 (0:00:00.108) 0:00:00.108 **********
2025-05-29 00:29:36.543692 | orchestrator | ok: [testbed-manager]
2025-05-29 00:29:36.566909 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:29:36.590522 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:29:36.618218 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:29:36.697714 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:29:36.698164 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:29:36.699204 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:29:36.699895 | orchestrator |
2025-05-29 00:29:36.703677 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-29 00:29:36.703723 | orchestrator |
2025-05-29 00:29:36.703735 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-29 00:29:36.703747 | orchestrator | Thursday 29 May 2025 00:29:36 +0000 (0:00:00.233) 0:00:00.342 **********
2025-05-29 00:29:40.402282 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:29:40.402786 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:29:40.403111 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:29:40.404036 | orchestrator | ok: [testbed-manager]
2025-05-29 00:29:40.404752 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:29:40.405756 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:29:40.406946 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:29:40.408476 | orchestrator |
2025-05-29 00:29:40.409434 | orchestrator | PLAY [Gather facts for all hosts (if using --limit)] ***************************
2025-05-29 00:29:40.409594 | orchestrator |
2025-05-29 00:29:40.410388 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-29 00:29:40.410726 | orchestrator | Thursday 29 May 2025 00:29:40 +0000 (0:00:03.704) 0:00:04.046 **********
2025-05-29 00:29:40.468880 | orchestrator | skipping: [testbed-manager] => (item=testbed-manager)
2025-05-29 00:29:40.505464 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-3)
2025-05-29 00:29:40.506100 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-4)
2025-05-29 00:29:40.506514 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-5)
2025-05-29 00:29:40.546442 | orchestrator | skipping: [testbed-node-3] => (item=testbed-manager)
2025-05-29 00:29:40.546536 | orchestrator | skipping: [testbed-node-4] => (item=testbed-manager)
2025-05-29 00:29:40.546619 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-29 00:29:40.546984 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-0)
2025-05-29 00:29:40.547763 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-29 00:29:40.548269 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-29 00:29:40.595906 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-29 00:29:40.596012 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-1)
2025-05-29 00:29:40.596027 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-29 00:29:40.596040 | orchestrator | skipping: [testbed-node-5] => (item=testbed-manager)
2025-05-29 00:29:40.596051 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-29 00:29:40.845387 | orchestrator | skipping: [testbed-manager] => (item=testbed-node-2)
2025-05-29 00:29:40.845941 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-29 00:29:40.846517 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-29 00:29:40.847045 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-29 00:29:40.847757 | orchestrator | skipping: [testbed-node-0] => (item=testbed-manager)
2025-05-29 00:29:40.848173 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:29:40.848908 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-29 00:29:40.850134 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-29 00:29:40.851197 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-29 00:29:40.851856 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-29 00:29:40.852720 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-29 00:29:40.853229 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-29 00:29:40.854390 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:29:40.855137 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-29 00:29:40.856058 | orchestrator | skipping: [testbed-node-1] => (item=testbed-manager)
2025-05-29 00:29:40.856704 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-29 00:29:40.857583 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:29:40.858568 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-29 00:29:40.859354 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-29 00:29:40.860324 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-29 00:29:40.860843 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-29 00:29:40.861742 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-29 00:29:40.862188 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:29:40.862842 | orchestrator | skipping: [testbed-node-2] => (item=testbed-manager)
2025-05-29 00:29:40.863326 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-29 00:29:40.864162 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-29 00:29:40.864743 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-29 00:29:40.865303 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-29 00:29:40.865754 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-29 00:29:40.866305 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-29 00:29:40.866799 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-29 00:29:40.867341 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:29:40.868025 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-29 00:29:40.868488 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-29 00:29:40.869055 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-29 00:29:40.869416 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-29 00:29:40.869945 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-29 00:29:40.870457 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:29:40.870945 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-29 00:29:40.871419 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-29 00:29:40.871844 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:29:40.872318 | orchestrator |
2025-05-29 00:29:40.872827 | orchestrator | PLAY [Apply bootstrap roles part 1] ********************************************
2025-05-29 00:29:40.873372 | orchestrator |
2025-05-29 00:29:40.873745 | orchestrator | TASK [osism.commons.hostname : Set hostname_name fact] *************************
2025-05-29 00:29:40.874135 | orchestrator | Thursday 29 May 2025 00:29:40 +0000 (0:00:00.442) 0:00:04.489 **********
2025-05-29 00:29:40.903769 | orchestrator | ok: [testbed-manager]
2025-05-29 00:29:40.949157 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:29:40.974971 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:29:40.994627 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:29:41.042433 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:29:41.042985 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:29:41.043760 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:29:41.044484 | orchestrator |
2025-05-29 00:29:41.044981 | orchestrator | TASK [osism.commons.hostname : Set hostname] ***********************************
2025-05-29 00:29:41.045594 | orchestrator | Thursday 29 May 2025 00:29:41 +0000 (0:00:00.197) 0:00:04.687 **********
2025-05-29 00:29:42.276566 | orchestrator | ok: [testbed-manager]
2025-05-29 00:29:42.277271 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:29:42.278114 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:29:42.278953 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:29:42.280285 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:29:42.280867 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:29:42.281595 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:29:42.282334 | orchestrator |
2025-05-29 00:29:42.283053 | orchestrator | TASK [osism.commons.hostname : Copy /etc/hostname] *****************************
2025-05-29 00:29:42.283794 | orchestrator | Thursday 29 May 2025 00:29:42 +0000 (0:00:01.233) 0:00:05.920 **********
2025-05-29 00:29:43.475273 | orchestrator | ok: [testbed-manager]
2025-05-29 00:29:43.476455 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:29:43.477348 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:29:43.478537 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:29:43.479983 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:29:43.481005 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:29:43.481688 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:29:43.484231 | orchestrator |
2025-05-29 00:29:43.484771 | orchestrator | TASK [osism.commons.hosts : Include type specific tasks] ***********************
2025-05-29 00:29:43.485295 | orchestrator | Thursday 29 May 2025 00:29:43 +0000 (0:00:01.197) 0:00:07.118 **********
2025-05-29 00:29:43.743972 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/hosts/tasks/type-template.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:29:43.744429 | orchestrator |
2025-05-29 00:29:43.745787 | orchestrator | TASK [osism.commons.hosts : Copy /etc/hosts file] ******************************
2025-05-29 00:29:43.746797 | orchestrator | Thursday 29 May 2025 00:29:43 +0000 (0:00:00.268) 0:00:07.386 **********
2025-05-29 00:29:45.758009 | orchestrator | changed: [testbed-manager]
2025-05-29 00:29:45.758891 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:29:45.759937 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:29:45.761020 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:29:45.762929 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:29:45.763615 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:29:45.763946 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:29:45.764804 | orchestrator |
2025-05-29 00:29:45.765466 | orchestrator | TASK [osism.commons.proxy : Include distribution specific tasks] ***************
2025-05-29 00:29:45.765808 | orchestrator | Thursday 29 May 2025 00:29:45 +0000 (0:00:02.012) 0:00:09.399 **********
2025-05-29 00:29:45.822262 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:29:46.007992 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/proxy/tasks/Debian-family.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:29:46.010103 | orchestrator |
2025-05-29 00:29:46.010138 |
orchestrator | TASK [osism.commons.proxy : Configure proxy parameters for apt] **************** 2025-05-29 00:29:46.010152 | orchestrator | Thursday 29 May 2025 00:29:45 +0000 (0:00:00.249) 0:00:09.649 ********** 2025-05-29 00:29:47.022840 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:29:47.023796 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:29:47.023833 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:29:47.023846 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:29:47.023873 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:29:47.023885 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:29:47.023896 | orchestrator | 2025-05-29 00:29:47.023971 | orchestrator | TASK [osism.commons.proxy : Set system wide settings in environment file] ****** 2025-05-29 00:29:47.024241 | orchestrator | Thursday 29 May 2025 00:29:47 +0000 (0:00:01.012) 0:00:10.661 ********** 2025-05-29 00:29:47.086871 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:29:47.585047 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:29:47.585186 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:29:47.585898 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:29:47.586533 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:29:47.587169 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:29:47.588144 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:29:47.588435 | orchestrator | 2025-05-29 00:29:47.588994 | orchestrator | TASK [osism.commons.proxy : Remove system wide settings in environment file] *** 2025-05-29 00:29:47.589681 | orchestrator | Thursday 29 May 2025 00:29:47 +0000 (0:00:00.567) 0:00:11.229 ********** 2025-05-29 00:29:47.690485 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:29:47.718628 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:29:47.736428 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:29:48.023344 | orchestrator | skipping: [testbed-node-0] 2025-05-29 
00:29:48.023979 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:29:48.025718 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:29:48.026582 | orchestrator | ok: [testbed-manager] 2025-05-29 00:29:48.030395 | orchestrator | 2025-05-29 00:29:48.032949 | orchestrator | TASK [osism.commons.resolvconf : Check minimum and maximum number of name servers] *** 2025-05-29 00:29:48.033212 | orchestrator | Thursday 29 May 2025 00:29:48 +0000 (0:00:00.435) 0:00:11.664 ********** 2025-05-29 00:29:48.087569 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:29:48.116502 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:29:48.133531 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:29:48.163063 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:29:48.243394 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:29:48.244449 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:29:48.245811 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:29:48.246585 | orchestrator | 2025-05-29 00:29:48.246886 | orchestrator | TASK [osism.commons.resolvconf : Include resolvconf tasks] ********************* 2025-05-29 00:29:48.247589 | orchestrator | Thursday 29 May 2025 00:29:48 +0000 (0:00:00.223) 0:00:11.887 ********** 2025-05-29 00:29:48.523754 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-resolv.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:29:48.523951 | orchestrator | 2025-05-29 00:29:48.524009 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific installation tasks] *** 2025-05-29 00:29:48.524471 | orchestrator | Thursday 29 May 2025 00:29:48 +0000 (0:00:00.279) 0:00:12.167 ********** 2025-05-29 00:29:48.851803 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:29:48.851989 | orchestrator | 2025-05-29 00:29:48.852521 | orchestrator | TASK [osism.commons.resolvconf : Remove packages configuring /etc/resolv.conf] *** 2025-05-29 00:29:48.854926 | orchestrator | Thursday 29 May 2025 00:29:48 +0000 (0:00:00.327) 0:00:12.494 ********** 2025-05-29 00:29:50.045032 | orchestrator | ok: [testbed-manager] 2025-05-29 00:29:50.045208 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:29:50.046000 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:29:50.046933 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:29:50.048109 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:29:50.048902 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:29:50.049253 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:29:50.049670 | orchestrator | 2025-05-29 00:29:50.051318 | orchestrator | TASK [osism.commons.resolvconf : Install package systemd-resolved] ************* 2025-05-29 00:29:50.051365 | orchestrator | Thursday 29 May 2025 00:29:50 +0000 (0:00:01.191) 0:00:13.685 ********** 2025-05-29 00:29:50.112596 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:29:50.133225 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:29:50.159324 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:29:50.191023 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:29:50.248013 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:29:50.248471 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:29:50.249439 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:29:50.250222 | orchestrator | 2025-05-29 00:29:50.250805 | orchestrator | TASK [osism.commons.resolvconf : Retrieve file status of /etc/resolv.conf] ***** 2025-05-29 00:29:50.251834 | orchestrator | Thursday 29 May 2025 
00:29:50 +0000 (0:00:00.204) 0:00:13.890 ********** 2025-05-29 00:29:50.790801 | orchestrator | ok: [testbed-manager] 2025-05-29 00:29:50.791844 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:29:50.792459 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:29:50.793850 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:29:50.794726 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:29:50.795331 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:29:50.795657 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:29:50.796390 | orchestrator | 2025-05-29 00:29:50.797044 | orchestrator | TASK [osism.commons.resolvconf : Archive existing file /etc/resolv.conf] ******* 2025-05-29 00:29:50.797421 | orchestrator | Thursday 29 May 2025 00:29:50 +0000 (0:00:00.543) 0:00:14.434 ********** 2025-05-29 00:29:50.875024 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:29:50.931536 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:29:50.960000 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:29:51.020157 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:29:51.020355 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:29:51.020417 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:29:51.020828 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:29:51.021365 | orchestrator | 2025-05-29 00:29:51.021408 | orchestrator | TASK [osism.commons.resolvconf : Link /run/systemd/resolve/stub-resolv.conf to /etc/resolv.conf] *** 2025-05-29 00:29:51.021660 | orchestrator | Thursday 29 May 2025 00:29:51 +0000 (0:00:00.230) 0:00:14.664 ********** 2025-05-29 00:29:51.549869 | orchestrator | ok: [testbed-manager] 2025-05-29 00:29:51.550092 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:29:51.550215 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:29:51.550876 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:29:51.551615 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:29:51.552012 | orchestrator | changed: 
[testbed-node-1] 2025-05-29 00:29:51.552476 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:29:51.552897 | orchestrator | 2025-05-29 00:29:51.553276 | orchestrator | TASK [osism.commons.resolvconf : Copy configuration files] ********************* 2025-05-29 00:29:51.553730 | orchestrator | Thursday 29 May 2025 00:29:51 +0000 (0:00:00.529) 0:00:15.194 ********** 2025-05-29 00:29:52.617958 | orchestrator | ok: [testbed-manager] 2025-05-29 00:29:52.618858 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:29:52.618887 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:29:52.618894 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:29:52.618901 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:29:52.618918 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:29:52.619092 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:29:52.619320 | orchestrator | 2025-05-29 00:29:52.620246 | orchestrator | TASK [osism.commons.resolvconf : Start/enable systemd-resolved service] ******** 2025-05-29 00:29:52.620356 | orchestrator | Thursday 29 May 2025 00:29:52 +0000 (0:00:01.062) 0:00:16.256 ********** 2025-05-29 00:29:53.714896 | orchestrator | ok: [testbed-manager] 2025-05-29 00:29:53.715077 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:29:53.715109 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:29:53.715213 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:29:53.718248 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:29:53.718293 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:29:53.718305 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:29:53.718758 | orchestrator | 2025-05-29 00:29:53.720226 | orchestrator | TASK [osism.commons.resolvconf : Include distribution specific configuration tasks] *** 2025-05-29 00:29:53.720502 | orchestrator | Thursday 29 May 2025 00:29:53 +0000 (0:00:01.101) 0:00:17.357 ********** 2025-05-29 00:29:53.991794 | orchestrator | included: 
/usr/share/ansible/collections/ansible_collections/osism/commons/roles/resolvconf/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:29:53.991979 | orchestrator | 2025-05-29 00:29:53.992160 | orchestrator | TASK [osism.commons.resolvconf : Restart systemd-resolved service] ************* 2025-05-29 00:29:53.995704 | orchestrator | Thursday 29 May 2025 00:29:53 +0000 (0:00:00.276) 0:00:17.634 ********** 2025-05-29 00:29:54.067225 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:29:55.450229 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:29:55.450346 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:29:55.450898 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:29:55.451116 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:29:55.452035 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:29:55.452837 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:29:55.452867 | orchestrator | 2025-05-29 00:29:55.452884 | orchestrator | TASK [osism.commons.repository : Gather variables for each operating system] *** 2025-05-29 00:29:55.453079 | orchestrator | Thursday 29 May 2025 00:29:55 +0000 (0:00:01.458) 0:00:19.093 ********** 2025-05-29 00:29:55.531174 | orchestrator | ok: [testbed-manager] 2025-05-29 00:29:55.564801 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:29:55.592095 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:29:55.623542 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:29:55.693201 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:29:55.693295 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:29:55.694197 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:29:55.695614 | orchestrator | 2025-05-29 00:29:55.695935 | orchestrator | TASK [osism.commons.repository : Set repository_default fact to default value] *** 2025-05-29 00:29:55.697046 | orchestrator | Thursday 29 May 2025 00:29:55 
+0000 (0:00:00.243) 0:00:19.336 ********** 2025-05-29 00:29:55.789253 | orchestrator | ok: [testbed-manager] 2025-05-29 00:29:55.814254 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:29:55.849425 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:29:55.918445 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:29:55.919003 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:29:55.920074 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:29:55.921089 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:29:55.921772 | orchestrator | 2025-05-29 00:29:55.922693 | orchestrator | TASK [osism.commons.repository : Set repositories to default] ****************** 2025-05-29 00:29:55.923748 | orchestrator | Thursday 29 May 2025 00:29:55 +0000 (0:00:00.226) 0:00:19.563 ********** 2025-05-29 00:29:56.026623 | orchestrator | ok: [testbed-manager] 2025-05-29 00:29:56.053547 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:29:56.089773 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:29:56.167901 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:29:56.168349 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:29:56.169463 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:29:56.170131 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:29:56.171169 | orchestrator | 2025-05-29 00:29:56.173143 | orchestrator | TASK [osism.commons.repository : Include distribution specific repository tasks] *** 2025-05-29 00:29:56.173855 | orchestrator | Thursday 29 May 2025 00:29:56 +0000 (0:00:00.249) 0:00:19.812 ********** 2025-05-29 00:29:56.430751 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/repository/tasks/Ubuntu.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:29:56.430943 | orchestrator | 2025-05-29 00:29:56.431564 | orchestrator | TASK [osism.commons.repository : Create /etc/apt/sources.list.d directory] ***** 2025-05-29 00:29:56.434582 | 
orchestrator | Thursday 29 May 2025 00:29:56 +0000 (0:00:00.262) 0:00:20.074 ********** 2025-05-29 00:29:56.993960 | orchestrator | ok: [testbed-manager] 2025-05-29 00:29:56.994828 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:29:56.995830 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:29:56.997053 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:29:56.997948 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:29:56.998546 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:29:56.999162 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:29:56.999904 | orchestrator | 2025-05-29 00:29:57.000801 | orchestrator | TASK [osism.commons.repository : Include tasks for Ubuntu < 24.04] ************* 2025-05-29 00:29:57.001379 | orchestrator | Thursday 29 May 2025 00:29:56 +0000 (0:00:00.561) 0:00:20.636 ********** 2025-05-29 00:29:57.087154 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:29:57.115090 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:29:57.139741 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:29:57.200483 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:29:57.204496 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:29:57.204534 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:29:57.204546 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:29:57.204599 | orchestrator | 2025-05-29 00:29:57.205972 | orchestrator | TASK [osism.commons.repository : Copy 99osism apt configuration] *************** 2025-05-29 00:29:57.206754 | orchestrator | Thursday 29 May 2025 00:29:57 +0000 (0:00:00.207) 0:00:20.844 ********** 2025-05-29 00:29:58.231538 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:29:58.234330 | orchestrator | changed: [testbed-manager] 2025-05-29 00:29:58.234406 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:29:58.234420 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:29:58.234431 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:29:58.234442 | orchestrator | 
changed: [testbed-node-1] 2025-05-29 00:29:58.234501 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:29:58.234731 | orchestrator | 2025-05-29 00:29:58.235161 | orchestrator | TASK [osism.commons.repository : Remove sources.list file] ********************* 2025-05-29 00:29:58.235503 | orchestrator | Thursday 29 May 2025 00:29:58 +0000 (0:00:01.029) 0:00:21.874 ********** 2025-05-29 00:29:58.796171 | orchestrator | ok: [testbed-manager] 2025-05-29 00:29:58.796609 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:29:58.797420 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:29:58.797938 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:29:58.799007 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:29:58.800096 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:29:58.800455 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:29:58.801344 | orchestrator | 2025-05-29 00:29:58.802627 | orchestrator | TASK [osism.commons.repository : Copy ubuntu.sources file] ********************* 2025-05-29 00:29:58.803008 | orchestrator | Thursday 29 May 2025 00:29:58 +0000 (0:00:00.565) 0:00:22.439 ********** 2025-05-29 00:29:59.900001 | orchestrator | ok: [testbed-manager] 2025-05-29 00:29:59.902338 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:29:59.902372 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:29:59.903184 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:29:59.904049 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:29:59.904702 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:29:59.905774 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:29:59.906182 | orchestrator | 2025-05-29 00:29:59.907061 | orchestrator | TASK [osism.commons.repository : Update package cache] ************************* 2025-05-29 00:29:59.907406 | orchestrator | Thursday 29 May 2025 00:29:59 +0000 (0:00:01.102) 0:00:23.541 ********** 2025-05-29 00:30:12.528796 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:30:12.528922 | orchestrator | ok: 
[testbed-node-5] 2025-05-29 00:30:12.529442 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:30:12.530427 | orchestrator | changed: [testbed-manager] 2025-05-29 00:30:12.531037 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:30:12.531778 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:30:12.533572 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:30:12.534471 | orchestrator | 2025-05-29 00:30:12.535228 | orchestrator | TASK [osism.services.rsyslog : Gather variables for each operating system] ***** 2025-05-29 00:30:12.535341 | orchestrator | Thursday 29 May 2025 00:30:12 +0000 (0:00:12.627) 0:00:36.168 ********** 2025-05-29 00:30:12.599917 | orchestrator | ok: [testbed-manager] 2025-05-29 00:30:12.626854 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:30:12.652370 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:30:12.676923 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:30:12.732248 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:30:12.733287 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:30:12.734355 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:30:12.735400 | orchestrator | 2025-05-29 00:30:12.736366 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_user variable to default value] ***** 2025-05-29 00:30:12.737391 | orchestrator | Thursday 29 May 2025 00:30:12 +0000 (0:00:00.207) 0:00:36.375 ********** 2025-05-29 00:30:12.803408 | orchestrator | ok: [testbed-manager] 2025-05-29 00:30:12.830696 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:30:12.851513 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:30:12.876849 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:30:12.929071 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:30:12.929273 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:30:12.930012 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:30:12.930493 | orchestrator | 2025-05-29 00:30:12.931204 | orchestrator | TASK [osism.services.rsyslog : Set rsyslog_workdir variable to 
default value] *** 2025-05-29 00:30:12.931884 | orchestrator | Thursday 29 May 2025 00:30:12 +0000 (0:00:00.197) 0:00:36.573 ********** 2025-05-29 00:30:12.998430 | orchestrator | ok: [testbed-manager] 2025-05-29 00:30:13.023794 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:30:13.047368 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:30:13.071451 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:30:13.122913 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:30:13.123035 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:30:13.123048 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:30:13.123764 | orchestrator | 2025-05-29 00:30:13.123946 | orchestrator | TASK [osism.services.rsyslog : Include distribution specific install tasks] **** 2025-05-29 00:30:13.124626 | orchestrator | Thursday 29 May 2025 00:30:13 +0000 (0:00:00.194) 0:00:36.768 ********** 2025-05-29 00:30:13.371564 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:30:13.372859 | orchestrator | 2025-05-29 00:30:13.375553 | orchestrator | TASK [osism.services.rsyslog : Install rsyslog package] ************************ 2025-05-29 00:30:13.375596 | orchestrator | Thursday 29 May 2025 00:30:13 +0000 (0:00:00.247) 0:00:37.015 ********** 2025-05-29 00:30:14.707773 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:30:14.707900 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:30:14.709113 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:30:14.710426 | orchestrator | ok: [testbed-manager] 2025-05-29 00:30:14.711272 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:30:14.711978 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:30:14.713883 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:30:14.714859 | orchestrator | 2025-05-29 00:30:14.716215 | orchestrator | TASK 
[osism.services.rsyslog : Copy rsyslog.conf configuration file] *********** 2025-05-29 00:30:14.716349 | orchestrator | Thursday 29 May 2025 00:30:14 +0000 (0:00:01.330) 0:00:38.345 ********** 2025-05-29 00:30:15.765974 | orchestrator | changed: [testbed-manager] 2025-05-29 00:30:15.766381 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:30:15.767294 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:30:15.767595 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:30:15.768972 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:30:15.769750 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:30:15.771304 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:30:15.773038 | orchestrator | 2025-05-29 00:30:15.774800 | orchestrator | TASK [osism.services.rsyslog : Manage rsyslog service] ************************* 2025-05-29 00:30:15.774861 | orchestrator | Thursday 29 May 2025 00:30:15 +0000 (0:00:01.062) 0:00:39.407 ********** 2025-05-29 00:30:16.562999 | orchestrator | ok: [testbed-manager] 2025-05-29 00:30:16.563109 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:30:16.563239 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:30:16.564125 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:30:16.565190 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:30:16.565437 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:30:16.565817 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:30:16.566165 | orchestrator | 2025-05-29 00:30:16.568357 | orchestrator | TASK [osism.services.rsyslog : Include fluentd tasks] ************************** 2025-05-29 00:30:16.568382 | orchestrator | Thursday 29 May 2025 00:30:16 +0000 (0:00:00.798) 0:00:40.206 ********** 2025-05-29 00:30:16.856376 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rsyslog/tasks/fluentd.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 
00:30:16.858826 | orchestrator | 2025-05-29 00:30:16.858850 | orchestrator | TASK [osism.services.rsyslog : Forward syslog message to local fluentd daemon] *** 2025-05-29 00:30:16.858858 | orchestrator | Thursday 29 May 2025 00:30:16 +0000 (0:00:00.292) 0:00:40.498 ********** 2025-05-29 00:30:17.837919 | orchestrator | changed: [testbed-manager] 2025-05-29 00:30:17.838115 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:30:17.839028 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:30:17.839723 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:30:17.841511 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:30:17.842070 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:30:17.842692 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:30:17.843159 | orchestrator | 2025-05-29 00:30:17.843938 | orchestrator | TASK [osism.services.rsyslog : Include additional log server tasks] ************ 2025-05-29 00:30:17.844312 | orchestrator | Thursday 29 May 2025 00:30:17 +0000 (0:00:00.980) 0:00:41.478 ********** 2025-05-29 00:30:17.933822 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:30:17.956069 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:30:17.977610 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:30:18.110857 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:30:18.111820 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:30:18.113181 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:30:18.114010 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:30:18.115068 | orchestrator | 2025-05-29 00:30:18.115734 | orchestrator | TASK [osism.commons.systohc : Install util-linux-extra package] **************** 2025-05-29 00:30:18.116678 | orchestrator | Thursday 29 May 2025 00:30:18 +0000 (0:00:00.276) 0:00:41.755 ********** 2025-05-29 00:30:29.711079 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:30:29.711192 | orchestrator | changed: [testbed-node-1] 2025-05-29 
00:30:29.711205 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:30:29.711214 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:30:29.711223 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:30:29.711232 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:30:29.711241 | orchestrator | changed: [testbed-manager]
2025-05-29 00:30:29.711250 | orchestrator |
2025-05-29 00:30:29.711260 | orchestrator | TASK [osism.commons.systohc : Sync hardware clock] *****************************
2025-05-29 00:30:29.711988 | orchestrator | Thursday 29 May 2025 00:30:29 +0000 (0:00:11.591) 0:00:53.347 **********
2025-05-29 00:30:30.659468 | orchestrator | ok: [testbed-manager]
2025-05-29 00:30:30.663478 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:30:30.664266 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:30:30.664881 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:30:30.665301 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:30:30.665660 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:30:30.666468 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:30:30.666490 | orchestrator |
2025-05-29 00:30:30.666939 | orchestrator | TASK [osism.commons.configfs : Start sys-kernel-config mount] ******************
2025-05-29 00:30:30.669094 | orchestrator | Thursday 29 May 2025 00:30:30 +0000 (0:00:00.956) 0:00:54.303 **********
2025-05-29 00:30:31.606392 | orchestrator | ok: [testbed-manager]
2025-05-29 00:30:31.607942 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:30:31.609012 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:30:31.610615 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:30:31.612004 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:30:31.612746 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:30:31.613578 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:30:31.613978 | orchestrator |
2025-05-29 00:30:31.614861 | orchestrator | TASK [osism.commons.packages : Gather variables for each operating system] *****
2025-05-29 00:30:31.615487 | orchestrator | Thursday 29 May 2025 00:30:31 +0000 (0:00:00.942) 0:00:55.246 **********
2025-05-29 00:30:31.677919 | orchestrator | ok: [testbed-manager]
2025-05-29 00:30:31.707666 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:30:31.724608 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:30:31.752619 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:30:31.804008 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:30:31.804311 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:30:31.804736 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:30:31.807176 | orchestrator |
2025-05-29 00:30:31.807472 | orchestrator | TASK [osism.commons.packages : Set required_packages_distribution variable to default value] ***
2025-05-29 00:30:31.808072 | orchestrator | Thursday 29 May 2025 00:30:31 +0000 (0:00:00.202) 0:00:55.448 **********
2025-05-29 00:30:31.881187 | orchestrator | ok: [testbed-manager]
2025-05-29 00:30:31.902135 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:30:31.928620 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:30:31.949838 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:30:32.021660 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:30:32.022329 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:30:32.023418 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:30:32.024978 | orchestrator |
2025-05-29 00:30:32.025518 | orchestrator | TASK [osism.commons.packages : Include distribution specific package tasks] ****
2025-05-29 00:30:32.026744 | orchestrator | Thursday 29 May 2025 00:30:32 +0000 (0:00:00.217) 0:00:55.665 **********
2025-05-29 00:30:32.316055 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/packages/tasks/package-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:30:32.316472 | orchestrator |
2025-05-29 00:30:32.317324 | orchestrator | TASK [osism.commons.packages : Install needrestart package] ********************
2025-05-29 00:30:32.318629 | orchestrator | Thursday 29 May 2025 00:30:32 +0000 (0:00:00.292) 0:00:55.958 **********
2025-05-29 00:30:33.832379 | orchestrator | ok: [testbed-manager]
2025-05-29 00:30:33.832881 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:30:33.834338 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:30:33.834912 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:30:33.835621 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:30:33.836808 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:30:33.838248 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:30:33.838798 | orchestrator |
2025-05-29 00:30:33.840938 | orchestrator | TASK [osism.commons.packages : Set needrestart mode] ***************************
2025-05-29 00:30:33.841450 | orchestrator | Thursday 29 May 2025 00:30:33 +0000 (0:00:01.516) 0:00:57.474 **********
2025-05-29 00:30:34.380798 | orchestrator | changed: [testbed-manager]
2025-05-29 00:30:34.380992 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:30:34.381214 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:30:34.381519 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:30:34.382173 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:30:34.382557 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:30:34.382821 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:30:34.383226 | orchestrator |
2025-05-29 00:30:34.383604 | orchestrator | TASK [osism.commons.packages : Set apt_cache_valid_time variable to default value] ***
2025-05-29 00:30:34.384131 | orchestrator | Thursday 29 May 2025 00:30:34 +0000 (0:00:00.550) 0:00:58.024 **********
2025-05-29 00:30:34.478843 | orchestrator | ok: [testbed-manager]
2025-05-29 00:30:34.508420 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:30:34.542107 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:30:34.571680 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:30:34.745399 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:30:34.745582 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:30:34.746801 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:30:34.746931 | orchestrator |
2025-05-29 00:30:34.747613 | orchestrator | TASK [osism.commons.packages : Update package cache] ***************************
2025-05-29 00:30:34.748071 | orchestrator | Thursday 29 May 2025 00:30:34 +0000 (0:00:00.363) 0:00:58.388 **********
2025-05-29 00:30:35.874136 | orchestrator | ok: [testbed-manager]
2025-05-29 00:30:35.874585 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:30:35.874689 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:30:35.875217 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:30:35.876046 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:30:35.876206 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:30:35.877959 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:30:35.878373 | orchestrator |
2025-05-29 00:30:35.878805 | orchestrator | TASK [osism.commons.packages : Download upgrade packages] **********************
2025-05-29 00:30:35.878987 | orchestrator | Thursday 29 May 2025 00:30:35 +0000 (0:00:01.123) 0:00:59.512 **********
2025-05-29 00:30:37.469433 | orchestrator | changed: [testbed-manager]
2025-05-29 00:30:37.469524 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:30:37.469999 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:30:37.470701 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:30:37.471324 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:30:37.471908 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:30:37.472282 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:30:37.473004 | orchestrator |
2025-05-29 00:30:37.473629 | orchestrator | TASK [osism.commons.packages : Upgrade packages] *******************************
2025-05-29 00:30:37.474006 | orchestrator | Thursday 29 May 2025 00:30:37 +0000 (0:00:01.598) 0:01:01.110 **********
2025-05-29 00:30:39.713455 | orchestrator | ok: [testbed-manager]
2025-05-29 00:30:39.713700 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:30:39.714290 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:30:39.715951 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:30:39.716739 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:30:39.718437 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:30:39.719570 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:30:39.720017 | orchestrator |
2025-05-29 00:30:39.721196 | orchestrator | TASK [osism.commons.packages : Download required packages] *********************
2025-05-29 00:30:39.721465 | orchestrator | Thursday 29 May 2025 00:30:39 +0000 (0:00:02.244) 0:01:03.354 **********
2025-05-29 00:31:17.764583 | orchestrator | ok: [testbed-manager]
2025-05-29 00:31:17.764726 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:31:17.764743 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:31:17.764754 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:31:17.764832 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:31:17.765106 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:31:17.765500 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:31:17.766195 | orchestrator |
2025-05-29 00:31:17.766545 | orchestrator | TASK [osism.commons.packages : Install required packages] **********************
2025-05-29 00:31:17.767038 | orchestrator | Thursday 29 May 2025 00:31:17 +0000 (0:00:38.049) 0:01:41.403 **********
2025-05-29 00:32:41.531553 | orchestrator | changed: [testbed-manager]
2025-05-29 00:32:41.531676 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:32:41.531693 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:32:41.531706 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:32:41.531717 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:32:41.531728 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:32:41.531739 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:32:41.531750 | orchestrator |
2025-05-29 00:32:41.531763 | orchestrator | TASK [osism.commons.packages : Remove useless packages from the cache] *********
2025-05-29 00:32:41.531775 | orchestrator | Thursday 29 May 2025 00:32:41 +0000 (0:01:23.762) 0:03:05.166 **********
2025-05-29 00:32:43.102066 | orchestrator | changed: [testbed-manager]
2025-05-29 00:32:43.102698 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:32:43.102934 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:32:43.104158 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:32:43.105450 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:32:43.107969 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:32:43.108941 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:32:43.109661 | orchestrator |
2025-05-29 00:32:43.110378 | orchestrator | TASK [osism.commons.packages : Remove dependencies that are no longer required] ***
2025-05-29 00:32:43.110656 | orchestrator | Thursday 29 May 2025 00:32:43 +0000 (0:00:01.578) 0:03:06.744 **********
2025-05-29 00:32:55.295930 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:32:55.296042 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:32:55.296058 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:32:55.296069 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:32:55.296081 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:32:55.296446 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:32:55.297039 | orchestrator | changed: [testbed-manager]
2025-05-29 00:32:55.298158 | orchestrator |
2025-05-29 00:32:55.298905 | orchestrator | TASK [osism.commons.sysctl : Include sysctl tasks] *****************************
2025-05-29 00:32:55.299647 | orchestrator | Thursday 29 May 2025 00:32:55 +0000 (0:00:12.186) 0:03:18.930 **********
2025-05-29 00:32:55.654980 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'elasticsearch', 'value': [{'name': 'vm.max_map_count', 'value': 262144}]})
2025-05-29 00:32:55.655306 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'rabbitmq', 'value': [{'name': 'net.ipv4.tcp_keepalive_time', 'value': 6}, {'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3}, {'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3}, {'name': 'net.core.wmem_max', 'value': 16777216}, {'name': 'net.core.rmem_max', 'value': 16777216}, {'name': 'net.ipv4.tcp_fin_timeout', 'value': 20}, {'name': 'net.ipv4.tcp_tw_reuse', 'value': 1}, {'name': 'net.core.somaxconn', 'value': 4096}, {'name': 'net.ipv4.tcp_syncookies', 'value': 0}, {'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192}]})
2025-05-29 00:32:55.657895 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'generic', 'value': [{'name': 'vm.swappiness', 'value': 1}]})
2025-05-29 00:32:55.666305 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'compute', 'value': [{'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576}]})
2025-05-29 00:32:55.666531 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/sysctl/tasks/sysctl.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 => (item={'key': 'k3s_node', 'value': [{'name': 'fs.inotify.max_user_instances', 'value': 1024}]})
2025-05-29 00:32:55.666808 | orchestrator |
2025-05-29 00:32:55.667202 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on elasticsearch] ***********
2025-05-29 00:32:55.667610 | orchestrator | Thursday 29 May 2025 00:32:55 +0000 (0:00:00.368) 0:03:19.299 **********
2025-05-29 00:32:55.711475 | orchestrator | skipping: [testbed-manager] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-29 00:32:55.739205 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:32:55.739338 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-29 00:32:55.770188 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:32:55.770798 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-29 00:32:55.774210 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-29 00:32:55.792957 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:32:55.817699 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:32:56.401680 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-29 00:32:56.401887 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-29 00:32:56.401909 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-29 00:32:56.402326 | orchestrator |
2025-05-29 00:32:56.402935 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on rabbitmq] ****************
2025-05-29 00:32:56.403291 | orchestrator | Thursday 29 May 2025 00:32:56 +0000 (0:00:00.746) 0:03:20.045 **********
2025-05-29 00:32:56.479235 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-29 00:32:56.479516 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-29 00:32:56.479535 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-29 00:32:56.479746 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-29 00:32:56.480134 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-29 00:32:56.480913 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-29 00:32:56.481241 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-29 00:32:56.484101 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-29 00:32:56.484156 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-29 00:32:56.484169 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-29 00:32:56.484181 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-29 00:32:56.484192 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-29 00:32:56.484203 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-29 00:32:56.484214 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-29 00:32:56.484224 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-29 00:32:56.511924 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:32:56.512369 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-29 00:32:56.512807 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-29 00:32:56.513823 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-29 00:32:56.514368 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-29 00:32:56.514564 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-29 00:32:56.517125 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-29 00:32:56.547693 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-29 00:32:56.548198 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:32:56.548225 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-29 00:32:56.548339 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-29 00:32:56.549733 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-29 00:32:56.549755 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-29 00:32:56.550105 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-29 00:32:56.550373 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-29 00:32:56.550693 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-29 00:32:56.553609 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-29 00:32:56.553689 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-29 00:32:56.554269 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-29 00:32:56.554365 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-29 00:32:56.580154 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-29 00:32:56.580217 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-29 00:32:56.580319 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-29 00:32:56.580443 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-29 00:32:56.580471 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:32:56.580670 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-29 00:32:56.581950 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-29 00:32:56.582439 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-29 00:32:56.603596 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:33:01.184082 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-29 00:33:01.186131 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-29 00:33:01.186170 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-29 00:33:01.187507 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-29 00:33:01.188782 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-29 00:33:01.189321 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-29 00:33:01.189948 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_time', 'value': 6})
2025-05-29 00:33:01.191400 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-29 00:33:01.191862 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-29 00:33:01.192816 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_intvl', 'value': 3})
2025-05-29 00:33:01.194928 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-29 00:33:01.194999 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_keepalive_probes', 'value': 3})
2025-05-29 00:33:01.195021 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.wmem_max', 'value': 16777216})
2025-05-29 00:33:01.195037 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-29 00:33:01.195853 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.rmem_max', 'value': 16777216})
2025-05-29 00:33:01.195918 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-29 00:33:01.195976 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-29 00:33:01.196894 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_fin_timeout', 'value': 20})
2025-05-29 00:33:01.197720 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-29 00:33:01.197775 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-29 00:33:01.197833 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_tw_reuse', 'value': 1})
2025-05-29 00:33:01.199025 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-29 00:33:01.199063 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-29 00:33:01.200497 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.core.somaxconn', 'value': 4096})
2025-05-29 00:33:01.201475 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-29 00:33:01.202669 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-29 00:33:01.203949 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_syncookies', 'value': 0})
2025-05-29 00:33:01.204266 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-29 00:33:01.204978 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-29 00:33:01.205612 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_max_syn_backlog', 'value': 8192})
2025-05-29 00:33:01.205971 | orchestrator |
2025-05-29 00:33:01.206631 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on generic] *****************
2025-05-29 00:33:01.207622 | orchestrator | Thursday 29 May 2025 00:33:01 +0000 (0:00:04.780) 0:03:24.825 **********
2025-05-29 00:33:01.788436 | orchestrator | changed: [testbed-manager] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-29 00:33:01.788549 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-29 00:33:01.789064 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-29 00:33:01.789440 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-29 00:33:01.790299 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-29 00:33:01.793189 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-29 00:33:01.794114 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.swappiness', 'value': 1})
2025-05-29 00:33:01.794530 | orchestrator |
2025-05-29 00:33:01.795593 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on compute] *****************
2025-05-29 00:33:01.795672 | orchestrator | Thursday 29 May 2025 00:33:01 +0000 (0:00:00.606) 0:03:25.432 **********
2025-05-29 00:33:01.847831 | orchestrator | skipping: [testbed-manager] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-29 00:33:01.873891 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:33:01.950841 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-29 00:33:02.310124 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-29 00:33:02.310318 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:33:02.312062 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:33:02.312643 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-29 00:33:02.313126 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:33:02.313571 | orchestrator | changed: [testbed-node-3] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-29 00:33:02.314133 | orchestrator | changed: [testbed-node-4] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-29 00:33:02.314597 | orchestrator | changed: [testbed-node-5] => (item={'name': 'net.netfilter.nf_conntrack_max', 'value': 1048576})
2025-05-29 00:33:02.315028 | orchestrator |
2025-05-29 00:33:02.315666 | orchestrator | TASK [osism.commons.sysctl : Set sysctl parameters on k3s_node] ****************
2025-05-29 00:33:02.316019 | orchestrator | Thursday 29 May 2025 00:33:02 +0000 (0:00:00.520) 0:03:25.952 **********
2025-05-29 00:33:02.363335 | orchestrator | skipping: [testbed-manager] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-29 00:33:02.385432 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:33:02.464352 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-29 00:33:02.464451 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-29 00:33:03.880638 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:33:03.880747 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:33:03.881470 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-29 00:33:03.881881 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:33:03.882745 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-29 00:33:03.886427 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-29 00:33:03.886454 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.inotify.max_user_instances', 'value': 1024})
2025-05-29 00:33:03.886466 | orchestrator |
2025-05-29 00:33:03.886478 | orchestrator | TASK [osism.commons.limits : Include limits tasks] *****************************
2025-05-29 00:33:03.886490 | orchestrator | Thursday 29 May 2025 00:33:03 +0000 (0:00:01.569) 0:03:27.522 **********
2025-05-29 00:33:03.966376 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:33:03.996159 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:33:04.016774 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:33:04.043194 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:33:04.168304 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:33:04.169271 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:33:04.170796 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:33:04.171492 | orchestrator |
2025-05-29 00:33:04.172879 | orchestrator | TASK [osism.commons.services : Populate service facts] *************************
2025-05-29 00:33:04.173730 | orchestrator | Thursday 29 May 2025 00:33:04 +0000 (0:00:00.289) 0:03:27.812 **********
2025-05-29 00:33:09.964935 | orchestrator | ok: [testbed-manager]
2025-05-29 00:33:09.965682 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:33:09.967505 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:33:09.968229 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:33:09.968528 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:33:09.969390 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:33:09.969881 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:33:09.970732 | orchestrator |
2025-05-29 00:33:09.971326 | orchestrator | TASK [osism.commons.services : Check services] *********************************
2025-05-29 00:33:09.972102 | orchestrator | Thursday 29 May 2025 00:33:09 +0000 (0:00:05.797) 0:03:33.609 **********
2025-05-29 00:33:10.047393 | orchestrator | skipping: [testbed-manager] => (item=nscd)
2025-05-29 00:33:10.048491 | orchestrator | skipping: [testbed-node-3] => (item=nscd)
2025-05-29 00:33:10.082586 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:33:10.082943 | orchestrator | skipping: [testbed-node-4] => (item=nscd)
2025-05-29 00:33:10.124714 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:33:10.124886 | orchestrator | skipping: [testbed-node-5] => (item=nscd)
2025-05-29 00:33:10.156558 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:33:10.192679 | orchestrator | skipping: [testbed-node-0] => (item=nscd)
2025-05-29 00:33:10.192869 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:33:10.263481 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:33:10.264285 | orchestrator | skipping: [testbed-node-1] => (item=nscd)
2025-05-29 00:33:10.264729 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:33:10.265524 | orchestrator | skipping: [testbed-node-2] => (item=nscd)
2025-05-29 00:33:10.265740 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:33:10.266432 | orchestrator |
2025-05-29 00:33:10.266859 | orchestrator | TASK [osism.commons.services : Start/enable required services] *****************
2025-05-29 00:33:10.267244 | orchestrator | Thursday 29 May 2025 00:33:10 +0000 (0:00:00.297) 0:03:33.906 **********
2025-05-29 00:33:11.241679 | orchestrator | ok: [testbed-manager] => (item=cron)
2025-05-29 00:33:11.241786 | orchestrator | ok: [testbed-node-3] => (item=cron)
2025-05-29 00:33:11.242114 | orchestrator | ok: [testbed-node-4] => (item=cron)
2025-05-29 00:33:11.245340 | orchestrator | ok: [testbed-node-5] => (item=cron)
2025-05-29 00:33:11.245725 | orchestrator | ok: [testbed-node-0] => (item=cron)
2025-05-29 00:33:11.246731 | orchestrator | ok: [testbed-node-1] => (item=cron)
2025-05-29 00:33:11.246888 | orchestrator | ok: [testbed-node-2] => (item=cron)
2025-05-29 00:33:11.247380 | orchestrator |
2025-05-29 00:33:11.247672 | orchestrator | TASK [osism.commons.motd : Include distribution specific configure tasks] ******
2025-05-29 00:33:11.248113 | orchestrator | Thursday 29 May 2025 00:33:11 +0000 (0:00:00.978) 0:03:34.884 **********
2025-05-29 00:33:11.620976 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/motd/tasks/configure-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:33:11.621240 | orchestrator |
2025-05-29 00:33:11.621962 | orchestrator | TASK [osism.commons.motd : Remove update-motd package] *************************
2025-05-29 00:33:11.623285 | orchestrator | Thursday 29 May 2025 00:33:11 +0000 (0:00:00.379) 0:03:35.264 **********
2025-05-29 00:33:12.891754 | orchestrator | ok: [testbed-manager]
2025-05-29 00:33:12.892488 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:33:12.894792 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:33:12.895893 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:33:12.896707 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:33:12.897251 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:33:12.897890 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:33:12.898514 | orchestrator |
2025-05-29 00:33:12.899083 | orchestrator | TASK [osism.commons.motd : Check if /etc/default/motd-news exists] *************
2025-05-29 00:33:12.899862 | orchestrator | Thursday 29 May 2025 00:33:12 +0000 (0:00:01.269) 0:03:36.534 **********
2025-05-29 00:33:13.454718 | orchestrator | ok: [testbed-manager]
2025-05-29 00:33:13.455474 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:33:13.457001 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:33:13.458129 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:33:13.458947 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:33:13.459650 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:33:13.460719 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:33:13.461222 | orchestrator |
2025-05-29 00:33:13.461879 | orchestrator | TASK [osism.commons.motd : Disable the dynamic motd-news service] **************
2025-05-29 00:33:13.462606 | orchestrator | Thursday 29 May 2025 00:33:13 +0000 (0:00:00.565) 0:03:37.099 **********
2025-05-29 00:33:14.101903 | orchestrator | changed: [testbed-manager]
2025-05-29 00:33:14.102335 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:33:14.104867 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:33:14.104912 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:33:14.106201 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:33:14.106248 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:33:14.107057 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:33:14.107547 | orchestrator |
2025-05-29 00:33:14.109408 | orchestrator | TASK [osism.commons.motd : Get all configuration files in /etc/pam.d] **********
2025-05-29 00:33:14.109483 | orchestrator | Thursday 29 May 2025 00:33:14 +0000 (0:00:00.644) 0:03:37.743 **********
2025-05-29 00:33:14.681888 | orchestrator | ok: [testbed-manager]
2025-05-29 00:33:14.682071 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:33:14.683414 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:33:14.687352 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:33:14.687868 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:33:14.688256 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:33:14.689004 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:33:14.689204 | orchestrator |
2025-05-29 00:33:14.689931 | orchestrator | TASK [osism.commons.motd : Remove pam_motd.so rule] ****************************
2025-05-29 00:33:14.689971 | orchestrator | Thursday 29 May 2025 00:33:14 +0000 (0:00:00.581) 0:03:38.325 **********
2025-05-29 00:33:15.667615 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748477034.0937665, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 00:33:15.668840 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748477085.1732216, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 00:33:15.668900 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748477073.7682571, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 00:33:15.669668 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748477200.7399766, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 00:33:15.670285 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748477354.122325, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 00:33:15.670940 | orchestrator | changed: [testbed-node-1] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748477131.1152906, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 00:33:15.671965 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/sshd', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 2133, 'inode': 591, 'dev': 2049, 'nlink': 1, 'atime': 1748477121.6886928, 'mtime': 1723170802.0, 'ctime': 1728031288.6324632, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 00:33:15.672917 | orchestrator | changed: [testbed-manager] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748477081.1621025, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 00:33:15.672943 | orchestrator | changed: [testbed-node-1]
=> (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748477032.7168052, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-29 00:33:15.673404 | orchestrator | changed: [testbed-node-3] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748477116.1598, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-29 00:33:15.673670 | orchestrator | changed: [testbed-node-0] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748477255.1590834, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-29 00:33:15.674095 | orchestrator | changed: [testbed-node-4] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 
2049, 'nlink': 1, 'atime': 1748477003.676514, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-29 00:33:15.674705 | orchestrator | changed: [testbed-node-5] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748476992.9414308, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-29 00:33:15.675063 | orchestrator | changed: [testbed-node-2] => (item={'path': '/etc/pam.d/login', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 4118, 'inode': 577, 'dev': 2049, 'nlink': 1, 'atime': 1748477040.9190488, 'mtime': 1712646062.0, 'ctime': 1728031288.6314633, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-29 00:33:15.675499 | orchestrator | 2025-05-29 00:33:15.675873 | orchestrator | TASK [osism.commons.motd : Copy motd file] ************************************* 2025-05-29 00:33:15.676367 | orchestrator | Thursday 29 May 2025 00:33:15 +0000 (0:00:00.986) 0:03:39.311 ********** 2025-05-29 00:33:16.811403 | orchestrator | changed: [testbed-manager] 2025-05-29 00:33:16.812301 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:33:16.813138 | orchestrator | changed: [testbed-node-5] 2025-05-29 
00:33:16.813854 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:33:16.814772 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:33:16.815572 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:33:16.816378 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:33:16.817262 | orchestrator | 2025-05-29 00:33:16.817992 | orchestrator | TASK [osism.commons.motd : Copy issue file] ************************************ 2025-05-29 00:33:16.818961 | orchestrator | Thursday 29 May 2025 00:33:16 +0000 (0:00:01.143) 0:03:40.454 ********** 2025-05-29 00:33:18.061286 | orchestrator | changed: [testbed-manager] 2025-05-29 00:33:18.061428 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:33:18.061552 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:33:18.061937 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:33:18.062381 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:33:18.063232 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:33:18.063485 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:33:18.063558 | orchestrator | 2025-05-29 00:33:18.063839 | orchestrator | TASK [osism.commons.motd : Configure SSH to print the motd] ******************** 2025-05-29 00:33:18.065365 | orchestrator | Thursday 29 May 2025 00:33:18 +0000 (0:00:01.249) 0:03:41.704 ********** 2025-05-29 00:33:18.132531 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:33:18.190848 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:33:18.226287 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:33:18.258271 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:33:18.289802 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:33:18.349127 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:33:18.349354 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:33:18.350106 | orchestrator | 2025-05-29 00:33:18.351920 | orchestrator | TASK [osism.commons.motd : Configure SSH to not print the motd] **************** 
2025-05-29 00:33:18.352312 | orchestrator | Thursday 29 May 2025  00:33:18 +0000 (0:00:00.289)       0:03:41.993 **********
2025-05-29 00:33:19.071614 | orchestrator | ok: [testbed-manager]
2025-05-29 00:33:19.071764 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:33:19.074248 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:33:19.074283 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:33:19.074383 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:33:19.074930 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:33:19.076056 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:33:19.076581 | orchestrator |
2025-05-29 00:33:19.077890 | orchestrator | TASK [osism.services.rng : Include distribution specific install tasks] ********
2025-05-29 00:33:19.078625 | orchestrator | Thursday 29 May 2025  00:33:19 +0000 (0:00:00.719)       0:03:42.713 **********
2025-05-29 00:33:19.535526 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/rng/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:33:19.536259 | orchestrator |
2025-05-29 00:33:19.541064 | orchestrator | TASK [osism.services.rng : Install rng package] ********************************
2025-05-29 00:33:19.541180 | orchestrator | Thursday 29 May 2025  00:33:19 +0000 (0:00:00.466)       0:03:43.179 **********
2025-05-29 00:33:27.041902 | orchestrator | ok: [testbed-manager]
2025-05-29 00:33:27.043933 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:33:27.043975 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:33:27.045336 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:33:27.046177 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:33:27.047256 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:33:27.048038 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:33:27.048658 | orchestrator |
2025-05-29 00:33:27.049484 | orchestrator | TASK [osism.services.rng : Remove haveged package] *****************************
2025-05-29 00:33:27.050276 | orchestrator | Thursday 29 May 2025  00:33:27 +0000 (0:00:07.504)       0:03:50.684 **********
2025-05-29 00:33:28.167012 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:33:28.167742 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:33:28.170341 | orchestrator | ok: [testbed-manager]
2025-05-29 00:33:28.170373 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:33:28.170386 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:33:28.170939 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:33:28.171832 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:33:28.172780 | orchestrator |
2025-05-29 00:33:28.173445 | orchestrator | TASK [osism.services.rng : Manage rng service] *********************************
2025-05-29 00:33:28.173765 | orchestrator | Thursday 29 May 2025  00:33:28 +0000 (0:00:01.125)       0:03:51.810 **********
2025-05-29 00:33:29.212440 | orchestrator | ok: [testbed-manager]
2025-05-29 00:33:29.213018 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:33:29.213468 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:33:29.215396 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:33:29.216807 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:33:29.217472 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:33:29.218126 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:33:29.219149 | orchestrator |
2025-05-29 00:33:29.219928 | orchestrator | TASK [osism.services.smartd : Include distribution specific install tasks] *****
2025-05-29 00:33:29.220472 | orchestrator | Thursday 29 May 2025  00:33:29 +0000 (0:00:01.043)       0:03:52.854 **********
2025-05-29 00:33:29.633209 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/smartd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:33:29.633613 | orchestrator |
2025-05-29 00:33:29.633684 | orchestrator | TASK [osism.services.smartd : Install smartmontools package] *******************
2025-05-29 00:33:29.634596 | orchestrator | Thursday 29 May 2025  00:33:29 +0000 (0:00:00.420)       0:03:53.275 **********
2025-05-29 00:33:38.123797 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:33:38.124007 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:33:38.124030 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:33:38.124091 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:33:38.124222 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:33:38.124837 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:33:38.127563 | orchestrator | changed: [testbed-manager]
2025-05-29 00:33:38.127643 | orchestrator |
2025-05-29 00:33:38.127667 | orchestrator | TASK [osism.services.smartd : Create /var/log/smartd directory] ****************
2025-05-29 00:33:38.127680 | orchestrator | Thursday 29 May 2025  00:33:38 +0000 (0:00:08.491)       0:04:01.766 **********
2025-05-29 00:33:38.871340 | orchestrator | changed: [testbed-manager]
2025-05-29 00:33:38.871487 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:33:38.872627 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:33:38.873311 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:33:38.874550 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:33:38.875170 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:33:38.876067 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:33:38.876809 | orchestrator |
2025-05-29 00:33:38.878135 | orchestrator | TASK [osism.services.smartd : Copy smartmontools configuration file] ***********
2025-05-29 00:33:38.878571 | orchestrator | Thursday 29 May 2025  00:33:38 +0000 (0:00:00.748)       0:04:02.515 **********
2025-05-29 00:33:39.969188 | orchestrator | changed: [testbed-manager]
2025-05-29 00:33:39.969750 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:33:39.973237 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:33:39.973971 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:33:39.974706 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:33:39.977254 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:33:39.978282 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:33:39.979609 | orchestrator |
2025-05-29 00:33:39.979789 | orchestrator | TASK [osism.services.smartd : Manage smartd service] ***************************
2025-05-29 00:33:39.980522 | orchestrator | Thursday 29 May 2025  00:33:39 +0000 (0:00:01.096)       0:04:03.611 **********
2025-05-29 00:33:41.000447 | orchestrator | changed: [testbed-manager]
2025-05-29 00:33:41.001692 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:33:41.002156 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:33:41.004819 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:33:41.004860 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:33:41.004878 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:33:41.004896 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:33:41.008508 | orchestrator |
2025-05-29 00:33:41.008792 | orchestrator | TASK [osism.commons.cleanup : Gather variables for each operating system] ******
2025-05-29 00:33:41.009242 | orchestrator | Thursday 29 May 2025  00:33:40 +0000 (0:00:01.031)       0:04:04.642 **********
2025-05-29 00:33:41.075619 | orchestrator | ok: [testbed-manager]
2025-05-29 00:33:41.105675 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:33:41.178425 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:33:41.216903 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:33:41.281920 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:33:41.282727 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:33:41.282927 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:33:41.283690 | orchestrator |
2025-05-29 00:33:41.286934 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_packages_distribution variable to default value] ***
2025-05-29 00:33:41.286977 | orchestrator | Thursday 29 May 2025  00:33:41 +0000 (0:00:00.284)       0:04:04.927 **********
2025-05-29 00:33:41.379211 | orchestrator | ok: [testbed-manager]
2025-05-29 00:33:41.413293 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:33:41.486214 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:33:41.526325 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:33:41.605631 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:33:41.608009 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:33:41.609242 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:33:41.609732 | orchestrator |
2025-05-29 00:33:41.611018 | orchestrator | TASK [osism.commons.cleanup : Set cleanup_services_distribution variable to default value] ***
2025-05-29 00:33:41.611974 | orchestrator | Thursday 29 May 2025  00:33:41 +0000 (0:00:00.279)       0:04:05.250 **********
2025-05-29 00:33:41.705672 | orchestrator | ok: [testbed-manager]
2025-05-29 00:33:41.740270 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:33:41.773430 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:33:41.821241 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:33:41.886009 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:33:41.887769 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:33:41.888936 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:33:41.890105 | orchestrator |
2025-05-29 00:33:41.890806 | orchestrator | TASK [osism.commons.cleanup : Populate service facts] **************************
2025-05-29 00:33:41.891714 | orchestrator | Thursday 29 May 2025  00:33:41 +0000 (0:00:00.279)       0:04:05.530 **********
2025-05-29 00:33:47.695815 | orchestrator | ok: [testbed-manager]
2025-05-29 00:33:47.696097 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:33:47.696694 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:33:47.696992 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:33:47.697467 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:33:47.698272 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:33:47.698297 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:33:47.700352 | orchestrator |
2025-05-29 00:33:47.700409 | orchestrator | TASK [osism.commons.cleanup : Include distribution specific timer tasks] *******
2025-05-29 00:33:47.700424 | orchestrator | Thursday 29 May 2025  00:33:47 +0000 (0:00:05.809)       0:04:11.340 **********
2025-05-29 00:33:48.089726 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/timers-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:33:48.093187 | orchestrator |
2025-05-29 00:33:48.093227 | orchestrator | TASK [osism.commons.cleanup : Disable apt-daily timers] ************************
2025-05-29 00:33:48.093241 | orchestrator | Thursday 29 May 2025  00:33:48 +0000 (0:00:00.391)       0:04:11.732 **********
2025-05-29 00:33:48.171913 | orchestrator | skipping: [testbed-manager] => (item=apt-daily-upgrade)
2025-05-29 00:33:48.172873 | orchestrator | skipping: [testbed-manager] => (item=apt-daily)
2025-05-29 00:33:48.175914 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily-upgrade)
2025-05-29 00:33:48.175944 | orchestrator | skipping: [testbed-node-3] => (item=apt-daily)
2025-05-29 00:33:48.226440 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:33:48.227232 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily-upgrade)
2025-05-29 00:33:48.227936 | orchestrator | skipping: [testbed-node-4] => (item=apt-daily)
2025-05-29 00:33:48.271119 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:33:48.271676 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily-upgrade)
2025-05-29 00:33:48.271823 | orchestrator | skipping: [testbed-node-5] => (item=apt-daily)
2025-05-29 00:33:48.307274 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:33:48.346253 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:33:48.346348 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily-upgrade)
2025-05-29 00:33:48.346448 | orchestrator | skipping: [testbed-node-0] => (item=apt-daily)
2025-05-29 00:33:48.346506 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily-upgrade)
2025-05-29 00:33:48.433260 | orchestrator | skipping: [testbed-node-1] => (item=apt-daily)
2025-05-29 00:33:48.433828 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:33:48.438720 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:33:48.438784 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily-upgrade)
2025-05-29 00:33:48.438798 | orchestrator | skipping: [testbed-node-2] => (item=apt-daily)
2025-05-29 00:33:48.438810 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:33:48.438821 | orchestrator |
2025-05-29 00:33:48.439283 | orchestrator | TASK [osism.commons.cleanup : Include service tasks] ***************************
2025-05-29 00:33:48.440033 | orchestrator | Thursday 29 May 2025  00:33:48 +0000 (0:00:00.344)       0:04:12.076 **********
2025-05-29 00:33:48.816414 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/services-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:33:48.816527 | orchestrator |
2025-05-29 00:33:48.819026 | orchestrator | TASK [osism.commons.cleanup : Cleanup services] ********************************
2025-05-29 00:33:48.820665 | orchestrator | Thursday 29 May 2025  00:33:48 +0000 (0:00:00.383)       0:04:12.460 **********
2025-05-29 00:33:48.897029 | orchestrator | skipping: [testbed-manager] => (item=ModemManager.service)
2025-05-29 00:33:48.898171 | orchestrator | skipping: [testbed-node-3] => (item=ModemManager.service)
2025-05-29 00:33:48.934920 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:33:48.976792 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:33:48.977687 | orchestrator | skipping: [testbed-node-4] => (item=ModemManager.service)
2025-05-29 00:33:48.978252 | orchestrator | skipping: [testbed-node-5] => (item=ModemManager.service)
2025-05-29 00:33:49.028822 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:33:49.029175 | orchestrator | skipping: [testbed-node-0] => (item=ModemManager.service)
2025-05-29 00:33:49.066176 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:33:49.067045 | orchestrator | skipping: [testbed-node-1] => (item=ModemManager.service)
2025-05-29 00:33:49.139359 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:33:49.139459 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:33:49.142645 | orchestrator | skipping: [testbed-node-2] => (item=ModemManager.service)
2025-05-29 00:33:49.142672 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:33:49.142684 | orchestrator |
2025-05-29 00:33:49.142760 | orchestrator | TASK [osism.commons.cleanup : Include packages tasks] **************************
2025-05-29 00:33:49.143519 | orchestrator | Thursday 29 May 2025  00:33:49 +0000 (0:00:00.322)       0:04:12.782 **********
2025-05-29 00:33:49.526519 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/packages-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:33:49.526620 | orchestrator |
2025-05-29 00:33:49.527112 | orchestrator | TASK [osism.commons.cleanup : Cleanup installed packages] **********************
2025-05-29 00:33:49.527849 | orchestrator | Thursday 29 May 2025  00:33:49 +0000 (0:00:00.385)       0:04:13.168 **********
2025-05-29 00:34:22.648747 | orchestrator | changed: [testbed-manager]
2025-05-29 00:34:22.648927 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:34:22.648947 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:34:22.649094 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:34:22.649113 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:34:22.649124 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:34:22.649135 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:34:22.649172 | orchestrator |
2025-05-29 00:34:22.649225 | orchestrator | TASK [osism.commons.cleanup : Remove cloudinit package] ************************
2025-05-29 00:34:22.649822 | orchestrator | Thursday 29 May 2025  00:34:22 +0000 (0:00:33.115)       0:04:46.283 **********
2025-05-29 00:34:30.694187 | orchestrator | changed: [testbed-manager]
2025-05-29 00:34:30.694304 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:34:30.694320 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:34:30.696154 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:34:30.696184 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:34:30.697980 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:34:30.698823 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:34:30.699644 | orchestrator |
2025-05-29 00:34:30.699849 | orchestrator | TASK [osism.commons.cleanup : Uninstall unattended-upgrades package] ***********
2025-05-29 00:34:30.700512 | orchestrator | Thursday 29 May 2025  00:34:30 +0000 (0:00:08.053)       0:04:54.336 **********
2025-05-29 00:34:37.978390 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:34:37.978512 | orchestrator | changed: [testbed-manager]
2025-05-29 00:34:37.978594 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:34:37.980089 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:34:37.981654 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:34:37.981993 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:34:37.982358 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:34:37.982871 | orchestrator |
2025-05-29 00:34:37.983262 | orchestrator | TASK [osism.commons.cleanup : Remove useless packages from the cache] **********
2025-05-29 00:34:37.983883 | orchestrator | Thursday 29 May 2025  00:34:37 +0000 (0:00:07.283)       0:05:01.620 **********
2025-05-29 00:34:39.549184 | orchestrator | ok: [testbed-manager]
2025-05-29 00:34:39.549613 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:34:39.549748 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:34:39.550640 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:34:39.550935 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:34:39.551599 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:34:39.552130 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:34:39.552735 | orchestrator |
2025-05-29 00:34:39.553642 | orchestrator | TASK [osism.commons.cleanup : Remove dependencies that are no longer required] ***
2025-05-29 00:34:39.555253 | orchestrator | Thursday 29 May 2025  00:34:39 +0000 (0:00:01.570)       0:05:03.191 **********
2025-05-29 00:34:45.215727 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:34:45.215908 | orchestrator | changed: [testbed-manager]
2025-05-29 00:34:45.216729 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:34:45.217206 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:34:45.217880 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:34:45.218552 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:34:45.219230 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:34:45.220597 | orchestrator |
2025-05-29 00:34:45.222182 | orchestrator | TASK [osism.commons.cleanup : Include cloudinit tasks] *************************
2025-05-29 00:34:45.222395 | orchestrator | Thursday 29 May 2025  00:34:45 +0000 (0:00:00.383)       0:05:08.858 **********
2025-05-29 00:34:45.598253 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/cleanup/tasks/cloudinit.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:34:45.598358 | orchestrator |
2025-05-29 00:34:45.600471 | orchestrator | TASK [osism.commons.cleanup : Remove cloud-init configuration directory] *******
2025-05-29 00:34:45.601513 | orchestrator | Thursday 29 May 2025  00:34:45 +0000 (0:00:00.383)       0:05:09.242 **********
2025-05-29 00:34:46.329062 | orchestrator | changed: [testbed-manager]
2025-05-29 00:34:46.331662 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:34:46.331697 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:34:46.331709 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:34:46.331905 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:34:46.332469 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:34:46.333720 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:34:46.333919 | orchestrator |
2025-05-29 00:34:46.334296 | orchestrator | TASK [osism.commons.timezone : Install tzdata package] *************************
2025-05-29 00:34:46.334884 | orchestrator | Thursday 29 May 2025  00:34:46 +0000 (0:00:00.730)       0:05:09.972 **********
2025-05-29 00:34:47.895145 | orchestrator | ok: [testbed-manager]
2025-05-29 00:34:47.895784 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:34:47.897210 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:34:47.898600 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:34:47.898971 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:34:47.899597 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:34:47.900446 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:34:47.900793 | orchestrator |
2025-05-29 00:34:47.901418 | orchestrator | TASK [osism.commons.timezone : Set timezone to UTC] ****************************
2025-05-29 00:34:47.902525 | orchestrator | Thursday 29 May 2025  00:34:47 +0000 (0:00:01.566)       0:05:11.538 **********
2025-05-29 00:34:48.627660 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:34:48.630819 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:34:48.630977 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:34:48.630992 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:34:48.631068 | orchestrator | changed: [testbed-manager]
2025-05-29 00:34:48.631495 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:34:48.632069 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:34:48.632477 | orchestrator |
2025-05-29 00:34:48.633022 | orchestrator | TASK [osism.commons.timezone : Create /etc/adjtime file] ***********************
2025-05-29 00:34:48.633468 | orchestrator | Thursday 29 May 2025  00:34:48 +0000 (0:00:00.733)       0:05:12.272 **********
2025-05-29 00:34:48.704052 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:34:48.737381 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:34:48.804043 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:34:48.834848 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:34:48.900392 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:34:48.900598 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:34:48.901012 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:34:48.901840 | orchestrator |
2025-05-29 00:34:48.902380 | orchestrator | TASK [osism.commons.timezone : Ensure UTC in /etc/adjtime] *********************
2025-05-29 00:34:48.902712 | orchestrator | Thursday 29 May 2025  00:34:48 +0000 (0:00:00.273)       0:05:12.545 **********
2025-05-29 00:34:48.996531 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:34:49.042240 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:34:49.080134 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:34:49.112631 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:34:49.285141 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:34:49.285529 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:34:49.286518 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:34:49.287124 | orchestrator |
2025-05-29 00:34:49.291329 | orchestrator | TASK [osism.services.docker : Gather variables for each operating system] ******
2025-05-29 00:34:49.292008 | orchestrator | Thursday 29 May 2025  00:34:49 +0000 (0:00:00.384)       0:05:12.930 **********
2025-05-29 00:34:49.392548 | orchestrator | ok: [testbed-manager]
2025-05-29 00:34:49.430603 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:34:49.481241 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:34:49.513426 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:34:49.581028 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:34:49.581596 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:34:49.582805 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:34:49.583651 | orchestrator |
2025-05-29 00:34:49.584308 | orchestrator | TASK [osism.services.docker : Set docker_version variable to default value] ****
2025-05-29 00:34:49.584873 | orchestrator | Thursday 29 May 2025  00:34:49 +0000 (0:00:00.296)       0:05:13.226 **********
2025-05-29 00:34:49.646118 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:34:49.676602 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:34:49.708533 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:34:49.740539 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:34:49.788209 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:34:49.856344 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:34:49.856586 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:34:49.856612 | orchestrator |
2025-05-29 00:34:49.856867 | orchestrator | TASK [osism.services.docker : Set docker_cli_version variable to default value] ***
2025-05-29 00:34:49.857081 | orchestrator | Thursday 29 May 2025  00:34:49 +0000 (0:00:00.275)       0:05:13.502 **********
2025-05-29 00:34:49.967647 | orchestrator | ok: [testbed-manager]
2025-05-29 00:34:50.006538 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:34:50.037955 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:34:50.075690 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:34:50.152465 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:34:50.152582 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:34:50.152703 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:34:50.153653 | orchestrator |
2025-05-29 00:34:50.153686 | orchestrator | TASK [osism.services.docker : Include block storage tasks] *********************
2025-05-29 00:34:50.153794 | orchestrator | Thursday 29 May 2025  00:34:50 +0000 (0:00:00.293)       0:05:13.795 **********
2025-05-29 00:34:50.256335 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:34:50.292635 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:34:50.329302 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:34:50.356458 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:34:50.421065 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:34:50.421166 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:34:50.421180 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:34:50.421281 | orchestrator |
2025-05-29 00:34:50.422112 | orchestrator | TASK [osism.services.docker : Include zram storage tasks] **********************
2025-05-29 00:34:50.423208 | orchestrator | Thursday 29 May 2025  00:34:50 +0000 (0:00:00.305)       0:05:14.061 **********
2025-05-29 00:34:50.524209 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:34:50.560774 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:34:50.600966 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:34:50.632049 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:34:50.664773 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:34:50.723235 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:34:50.723670 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:34:50.725034 | orchestrator |
2025-05-29 00:34:50.726239 | orchestrator | TASK [osism.services.docker : Include docker install tasks] ********************
2025-05-29 00:34:50.727209 | orchestrator | Thursday 29 May 2025  00:34:50 +0000 (0:00:00.305)       0:05:14.366 **********
2025-05-29 00:34:51.215936 | orchestrator | included:
/usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/install-docker-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:34:51.216102 | orchestrator | 2025-05-29 00:34:51.216902 | orchestrator | TASK [osism.services.docker : Remove old architecture-dependent repository] **** 2025-05-29 00:34:51.217787 | orchestrator | Thursday 29 May 2025 00:34:51 +0000 (0:00:00.493) 0:05:14.860 ********** 2025-05-29 00:34:52.038642 | orchestrator | ok: [testbed-manager] 2025-05-29 00:34:52.038797 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:34:52.038887 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:34:52.040229 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:34:52.040864 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:34:52.042283 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:34:52.042651 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:34:52.043655 | orchestrator | 2025-05-29 00:34:52.044241 | orchestrator | TASK [osism.services.docker : Gather package facts] **************************** 2025-05-29 00:34:52.046587 | orchestrator | Thursday 29 May 2025 00:34:52 +0000 (0:00:00.819) 0:05:15.679 ********** 2025-05-29 00:34:54.756131 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:34:54.756304 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:34:54.757171 | orchestrator | ok: [testbed-manager] 2025-05-29 00:34:54.758273 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:34:54.759474 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:34:54.759517 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:34:54.760077 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:34:54.760662 | orchestrator | 2025-05-29 00:34:54.761645 | orchestrator | TASK [osism.services.docker : Check whether packages are installed that should not be installed] *** 2025-05-29 00:34:54.762005 | orchestrator | Thursday 29 May 2025 00:34:54 +0000 
(0:00:02.721) 0:05:18.400 ********** 2025-05-29 00:34:54.829695 | orchestrator | skipping: [testbed-manager] => (item=containerd)  2025-05-29 00:34:54.829847 | orchestrator | skipping: [testbed-manager] => (item=docker.io)  2025-05-29 00:34:54.914197 | orchestrator | skipping: [testbed-manager] => (item=docker-engine)  2025-05-29 00:34:54.914292 | orchestrator | skipping: [testbed-node-3] => (item=containerd)  2025-05-29 00:34:54.914500 | orchestrator | skipping: [testbed-node-3] => (item=docker.io)  2025-05-29 00:34:54.914995 | orchestrator | skipping: [testbed-node-3] => (item=docker-engine)  2025-05-29 00:34:54.986187 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:34:54.986287 | orchestrator | skipping: [testbed-node-4] => (item=containerd)  2025-05-29 00:34:54.986633 | orchestrator | skipping: [testbed-node-4] => (item=docker.io)  2025-05-29 00:34:55.062877 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:34:55.062977 | orchestrator | skipping: [testbed-node-4] => (item=docker-engine)  2025-05-29 00:34:55.063242 | orchestrator | skipping: [testbed-node-5] => (item=containerd)  2025-05-29 00:34:55.063672 | orchestrator | skipping: [testbed-node-5] => (item=docker.io)  2025-05-29 00:34:55.064117 | orchestrator | skipping: [testbed-node-5] => (item=docker-engine)  2025-05-29 00:34:55.133323 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:34:55.133791 | orchestrator | skipping: [testbed-node-0] => (item=containerd)  2025-05-29 00:34:55.134533 | orchestrator | skipping: [testbed-node-0] => (item=docker.io)  2025-05-29 00:34:55.135116 | orchestrator | skipping: [testbed-node-0] => (item=docker-engine)  2025-05-29 00:34:55.205278 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:34:55.207106 | orchestrator | skipping: [testbed-node-1] => (item=containerd)  2025-05-29 00:34:55.207478 | orchestrator | skipping: [testbed-node-1] => (item=docker.io)  2025-05-29 00:34:55.344036 | orchestrator | skipping: [testbed-node-1] => 
(item=docker-engine)  2025-05-29 00:34:55.344181 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:34:55.344736 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:34:55.345632 | orchestrator | skipping: [testbed-node-2] => (item=containerd)  2025-05-29 00:34:55.346826 | orchestrator | skipping: [testbed-node-2] => (item=docker.io)  2025-05-29 00:34:55.347138 | orchestrator | skipping: [testbed-node-2] => (item=docker-engine)  2025-05-29 00:34:55.347527 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:34:55.348259 | orchestrator | 2025-05-29 00:34:55.348595 | orchestrator | TASK [osism.services.docker : Install apt-transport-https package] ************* 2025-05-29 00:34:55.349401 | orchestrator | Thursday 29 May 2025 00:34:55 +0000 (0:00:00.588) 0:05:18.989 ********** 2025-05-29 00:35:06.749106 | orchestrator | ok: [testbed-manager] 2025-05-29 00:35:06.749230 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:35:06.749276 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:35:06.749289 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:35:06.749300 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:35:06.749311 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:35:06.749321 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:35:06.749332 | orchestrator | 2025-05-29 00:35:06.749344 | orchestrator | TASK [osism.services.docker : Add repository gpg key] ************************** 2025-05-29 00:35:06.749357 | orchestrator | Thursday 29 May 2025 00:35:06 +0000 (0:00:11.396) 0:05:30.385 ********** 2025-05-29 00:35:07.871419 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:35:07.872219 | orchestrator | ok: [testbed-manager] 2025-05-29 00:35:07.873719 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:35:07.874127 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:35:07.875243 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:35:07.875704 | orchestrator | changed: [testbed-node-1] 
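The "Add repository gpg key" and "Add repository" tasks above configure the Docker apt repository on each Debian-family host. The log does not show the role's actual URLs or keyring paths, so the shell sketch below is illustrative only; it writes to a scratch directory so it runs without root and without network access.

```shell
# Hypothetical shell equivalent of the role's "Add repository gpg key" /
# "Add repository" tasks. Paths, URL, and codename are assumptions, not
# taken from the log. Scratch dir keeps this runnable without root.
scratch=$(mktemp -d)
keyring="$scratch/docker.gpg"   # normally /etc/apt/keyrings/docker.gpg
list="$scratch/docker.list"     # normally /etc/apt/sources.list.d/docker.list

# Fetching and dearmoring the signing key would look like:
#   curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o "$keyring"
: > "$keyring"   # placeholder so the sketch runs offline

# Register the repository entry, bound to the key via signed-by:
arch=$(dpkg --print-architecture 2>/dev/null || echo amd64)
codename=noble   # Ubuntu 24.04, per the job name
echo "deb [arch=$arch signed-by=$keyring] https://download.docker.com/linux/ubuntu $codename stable" > "$list"
cat "$list"
```

After writing such a list file, the subsequent "Update package cache" task corresponds to a plain `apt-get update`.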
2025-05-29 00:35:07.876539 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:35:07.877191 | orchestrator | 2025-05-29 00:35:07.878330 | orchestrator | TASK [osism.services.docker : Add repository] ********************************** 2025-05-29 00:35:07.879692 | orchestrator | Thursday 29 May 2025 00:35:07 +0000 (0:00:01.127) 0:05:31.513 ********** 2025-05-29 00:35:15.537730 | orchestrator | ok: [testbed-manager] 2025-05-29 00:35:15.538645 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:35:15.539487 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:35:15.540905 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:35:15.541701 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:35:15.542406 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:35:15.542855 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:35:15.543867 | orchestrator | 2025-05-29 00:35:15.544368 | orchestrator | TASK [osism.services.docker : Update package cache] **************************** 2025-05-29 00:35:15.545053 | orchestrator | Thursday 29 May 2025 00:35:15 +0000 (0:00:07.669) 0:05:39.182 ********** 2025-05-29 00:35:18.665327 | orchestrator | changed: [testbed-manager] 2025-05-29 00:35:18.665410 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:35:18.665765 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:35:18.665919 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:35:18.668680 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:35:18.669241 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:35:18.669258 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:35:18.670082 | orchestrator | 2025-05-29 00:35:18.670780 | orchestrator | TASK [osism.services.docker : Pin docker package version] ********************** 2025-05-29 00:35:18.671084 | orchestrator | Thursday 29 May 2025 00:35:18 +0000 (0:00:03.124) 0:05:42.306 ********** 2025-05-29 00:35:19.992034 | orchestrator | ok: [testbed-manager] 2025-05-29 00:35:19.992154 | 
orchestrator | changed: [testbed-node-3] 2025-05-29 00:35:19.995918 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:35:19.996942 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:35:19.997411 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:35:19.999274 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:35:20.000204 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:35:20.000852 | orchestrator | 2025-05-29 00:35:20.001047 | orchestrator | TASK [osism.services.docker : Pin docker-cli package version] ****************** 2025-05-29 00:35:20.001990 | orchestrator | Thursday 29 May 2025 00:35:19 +0000 (0:00:01.326) 0:05:43.632 ********** 2025-05-29 00:35:21.631439 | orchestrator | ok: [testbed-manager] 2025-05-29 00:35:21.631891 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:35:21.632855 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:35:21.633441 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:35:21.634365 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:35:21.634738 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:35:21.635829 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:35:21.636545 | orchestrator | 2025-05-29 00:35:21.637243 | orchestrator | TASK [osism.services.docker : Unlock containerd package] *********************** 2025-05-29 00:35:21.637684 | orchestrator | Thursday 29 May 2025 00:35:21 +0000 (0:00:01.639) 0:05:45.272 ********** 2025-05-29 00:35:21.859783 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:35:21.934343 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:35:22.002567 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:35:22.082477 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:35:22.228550 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:35:22.228758 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:35:22.229921 | orchestrator | changed: [testbed-manager] 2025-05-29 00:35:22.230938 | orchestrator | 
2025-05-29 00:35:22.232159 | orchestrator | TASK [osism.services.docker : Install containerd package] ********************** 2025-05-29 00:35:22.233354 | orchestrator | Thursday 29 May 2025 00:35:22 +0000 (0:00:00.596) 0:05:45.869 ********** 2025-05-29 00:35:31.773925 | orchestrator | ok: [testbed-manager] 2025-05-29 00:35:31.774386 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:35:31.774703 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:35:31.775051 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:35:31.775751 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:35:31.776115 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:35:31.777056 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:35:31.777402 | orchestrator | 2025-05-29 00:35:31.777880 | orchestrator | TASK [osism.services.docker : Lock containerd package] ************************* 2025-05-29 00:35:31.778274 | orchestrator | Thursday 29 May 2025 00:35:31 +0000 (0:00:09.548) 0:05:55.417 ********** 2025-05-29 00:35:32.698171 | orchestrator | changed: [testbed-manager] 2025-05-29 00:35:32.698872 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:35:32.699241 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:35:32.700686 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:35:32.701789 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:35:32.702325 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:35:32.703069 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:35:32.703844 | orchestrator | 2025-05-29 00:35:32.704843 | orchestrator | TASK [osism.services.docker : Install docker-cli package] ********************** 2025-05-29 00:35:32.705308 | orchestrator | Thursday 29 May 2025 00:35:32 +0000 (0:00:00.925) 0:05:56.343 ********** 2025-05-29 00:35:44.977683 | orchestrator | ok: [testbed-manager] 2025-05-29 00:35:44.977807 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:35:44.977824 | orchestrator | changed: [testbed-node-1] 
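The "Pin docker package version" and "Lock containerd package" tasks above keep apt from silently upgrading the container runtime. The concrete version string is not visible in the log, so the sketch below uses a hypothetical pin; it writes to a scratch directory instead of `/etc/apt/preferences.d/` so it runs unprivileged.

```shell
# Sketch of what a docker package pin amounts to on a Debian-family host.
# The package name is real; the version pattern is a hypothetical example.
scratch=$(mktemp -d)
prefs="$scratch/docker-ce"   # normally /etc/apt/preferences.d/docker-ce
cat > "$prefs" <<'EOF'
Package: docker-ce
Pin: version 5:27.*
Pin-Priority: 1001
EOF
# Locking a package ("Lock containerd package") is typically a hold:
#   apt-mark hold containerd.io
cat "$prefs"
```

A `Pin-Priority` above 1000 forces the pinned version even if it would be a downgrade, which is why locked hosts show `ok` rather than `changed` on re-runs.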
2025-05-29 00:35:44.978071 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:35:44.978091 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:35:44.978101 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:35:44.978111 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:35:44.978121 | orchestrator | 2025-05-29 00:35:44.978132 | orchestrator | TASK [osism.services.docker : Install docker package] ************************** 2025-05-29 00:35:44.978151 | orchestrator | Thursday 29 May 2025 00:35:44 +0000 (0:00:12.271) 0:06:08.615 ********** 2025-05-29 00:35:57.529209 | orchestrator | ok: [testbed-manager] 2025-05-29 00:35:57.529340 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:35:57.529359 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:35:57.529371 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:35:57.529615 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:35:57.529640 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:35:57.529895 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:35:57.530264 | orchestrator | 2025-05-29 00:35:57.530953 | orchestrator | TASK [osism.services.docker : Unblock installation of python docker packages] *** 2025-05-29 00:35:57.531603 | orchestrator | Thursday 29 May 2025 00:35:57 +0000 (0:00:12.549) 0:06:21.165 ********** 2025-05-29 00:35:57.920840 | orchestrator | ok: [testbed-manager] => (item=python3-docker) 2025-05-29 00:35:58.728388 | orchestrator | ok: [testbed-node-3] => (item=python3-docker) 2025-05-29 00:35:58.729490 | orchestrator | ok: [testbed-node-4] => (item=python3-docker) 2025-05-29 00:35:58.729542 | orchestrator | ok: [testbed-node-5] => (item=python3-docker) 2025-05-29 00:35:58.730297 | orchestrator | ok: [testbed-manager] => (item=python-docker) 2025-05-29 00:35:58.730967 | orchestrator | ok: [testbed-node-0] => (item=python3-docker) 2025-05-29 00:35:58.733509 | orchestrator | ok: [testbed-node-1] => (item=python3-docker) 2025-05-29 00:35:58.734210 | 
orchestrator | ok: [testbed-node-2] => (item=python3-docker) 2025-05-29 00:35:58.735098 | orchestrator | ok: [testbed-node-3] => (item=python-docker) 2025-05-29 00:35:58.735276 | orchestrator | ok: [testbed-node-4] => (item=python-docker) 2025-05-29 00:35:58.736103 | orchestrator | ok: [testbed-node-5] => (item=python-docker) 2025-05-29 00:35:58.736638 | orchestrator | ok: [testbed-node-0] => (item=python-docker) 2025-05-29 00:35:58.737071 | orchestrator | ok: [testbed-node-1] => (item=python-docker) 2025-05-29 00:35:58.737856 | orchestrator | ok: [testbed-node-2] => (item=python-docker) 2025-05-29 00:35:58.738198 | orchestrator | 2025-05-29 00:35:58.738639 | orchestrator | TASK [osism.services.docker : Install python3 docker package] ****************** 2025-05-29 00:35:58.738904 | orchestrator | Thursday 29 May 2025 00:35:58 +0000 (0:00:01.206) 0:06:22.371 ********** 2025-05-29 00:35:58.857871 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:35:58.928288 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:35:58.991209 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:35:59.054522 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:35:59.122225 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:35:59.255519 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:35:59.255714 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:35:59.256236 | orchestrator | 2025-05-29 00:35:59.257715 | orchestrator | TASK [osism.services.docker : Install python3 docker package from Debian Sid] *** 2025-05-29 00:35:59.257875 | orchestrator | Thursday 29 May 2025 00:35:59 +0000 (0:00:00.527) 0:06:22.899 ********** 2025-05-29 00:36:03.197481 | orchestrator | ok: [testbed-manager] 2025-05-29 00:36:03.197941 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:36:03.200144 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:36:03.201344 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:36:03.203501 | orchestrator | changed: 
[testbed-node-5] 2025-05-29 00:36:03.203707 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:36:03.204574 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:36:03.205453 | orchestrator | 2025-05-29 00:36:03.205874 | orchestrator | TASK [osism.services.docker : Remove python docker packages (install python bindings from pip)] *** 2025-05-29 00:36:03.206557 | orchestrator | Thursday 29 May 2025 00:36:03 +0000 (0:00:03.940) 0:06:26.840 ********** 2025-05-29 00:36:03.329995 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:36:03.403478 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:36:03.471597 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:36:03.705798 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:36:03.772666 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:36:03.890973 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:36:03.891054 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:36:03.891330 | orchestrator | 2025-05-29 00:36:03.892131 | orchestrator | TASK [osism.services.docker : Block installation of python docker packages (install python bindings from pip)] *** 2025-05-29 00:36:03.892157 | orchestrator | Thursday 29 May 2025 00:36:03 +0000 (0:00:00.695) 0:06:27.536 ********** 2025-05-29 00:36:03.979900 | orchestrator | skipping: [testbed-manager] => (item=python3-docker)  2025-05-29 00:36:03.980514 | orchestrator | skipping: [testbed-manager] => (item=python-docker)  2025-05-29 00:36:04.067506 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:36:04.067662 | orchestrator | skipping: [testbed-node-3] => (item=python3-docker)  2025-05-29 00:36:04.068878 | orchestrator | skipping: [testbed-node-3] => (item=python-docker)  2025-05-29 00:36:04.149540 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:36:04.149641 | orchestrator | skipping: [testbed-node-4] => (item=python3-docker)  2025-05-29 00:36:04.150625 | orchestrator | skipping: [testbed-node-4] => 
(item=python-docker)  2025-05-29 00:36:04.221537 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:36:04.221976 | orchestrator | skipping: [testbed-node-5] => (item=python3-docker)  2025-05-29 00:36:04.222126 | orchestrator | skipping: [testbed-node-5] => (item=python-docker)  2025-05-29 00:36:04.288482 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:36:04.288710 | orchestrator | skipping: [testbed-node-0] => (item=python3-docker)  2025-05-29 00:36:04.291048 | orchestrator | skipping: [testbed-node-0] => (item=python-docker)  2025-05-29 00:36:04.366259 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:36:04.366600 | orchestrator | skipping: [testbed-node-1] => (item=python3-docker)  2025-05-29 00:36:04.366630 | orchestrator | skipping: [testbed-node-1] => (item=python-docker)  2025-05-29 00:36:04.476319 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:36:04.476615 | orchestrator | skipping: [testbed-node-2] => (item=python3-docker)  2025-05-29 00:36:04.478070 | orchestrator | skipping: [testbed-node-2] => (item=python-docker)  2025-05-29 00:36:04.479198 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:36:04.480820 | orchestrator | 2025-05-29 00:36:04.482088 | orchestrator | TASK [osism.services.docker : Install python3-pip package (install python bindings from pip)] *** 2025-05-29 00:36:04.482551 | orchestrator | Thursday 29 May 2025 00:36:04 +0000 (0:00:00.584) 0:06:28.120 ********** 2025-05-29 00:36:04.602334 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:36:04.672459 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:36:04.735888 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:36:04.797765 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:36:04.865596 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:36:04.969814 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:36:04.971771 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:36:04.972924 | orchestrator | 
2025-05-29 00:36:04.974491 | orchestrator | TASK [osism.services.docker : Install docker packages (install python bindings from pip)] *** 2025-05-29 00:36:04.975710 | orchestrator | Thursday 29 May 2025 00:36:04 +0000 (0:00:00.492) 0:06:28.613 ********** 2025-05-29 00:36:05.104965 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:36:05.172552 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:36:05.239082 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:36:05.307645 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:36:05.367962 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:36:05.479031 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:36:05.479517 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:36:05.481399 | orchestrator | 2025-05-29 00:36:05.482079 | orchestrator | TASK [osism.services.docker : Install packages required by docker login] ******* 2025-05-29 00:36:05.482685 | orchestrator | Thursday 29 May 2025 00:36:05 +0000 (0:00:00.508) 0:06:29.121 ********** 2025-05-29 00:36:05.624035 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:36:05.689685 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:36:05.770909 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:36:05.834242 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:36:05.896010 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:36:06.020409 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:36:06.021121 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:36:06.021629 | orchestrator | 2025-05-29 00:36:06.021984 | orchestrator | TASK [osism.services.docker : Ensure that some packages are not installed] ***** 2025-05-29 00:36:06.022575 | orchestrator | Thursday 29 May 2025 00:36:06 +0000 (0:00:00.544) 0:06:29.666 ********** 2025-05-29 00:36:12.178327 | orchestrator | ok: [testbed-manager] 2025-05-29 00:36:12.178495 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:36:12.178579 | 
orchestrator | changed: [testbed-node-0] 2025-05-29 00:36:12.179317 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:36:12.180474 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:36:12.181088 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:36:12.181736 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:36:12.182365 | orchestrator | 2025-05-29 00:36:12.182611 | orchestrator | TASK [osism.services.docker : Include config tasks] **************************** 2025-05-29 00:36:12.183019 | orchestrator | Thursday 29 May 2025 00:36:12 +0000 (0:00:06.154) 0:06:35.821 ********** 2025-05-29 00:36:12.919335 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/config.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:36:12.919612 | orchestrator | 2025-05-29 00:36:12.921039 | orchestrator | TASK [osism.services.docker : Create plugins directory] ************************ 2025-05-29 00:36:12.922091 | orchestrator | Thursday 29 May 2025 00:36:12 +0000 (0:00:00.741) 0:06:36.563 ********** 2025-05-29 00:36:13.703086 | orchestrator | ok: [testbed-manager] 2025-05-29 00:36:13.703242 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:36:13.704207 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:36:13.705284 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:36:13.707781 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:36:13.708261 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:36:13.708955 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:36:13.710288 | orchestrator | 2025-05-29 00:36:13.710512 | orchestrator | TASK [osism.services.docker : Create systemd overlay directory] **************** 2025-05-29 00:36:13.711314 | orchestrator | Thursday 29 May 2025 00:36:13 +0000 (0:00:00.781) 0:06:37.344 ********** 2025-05-29 00:36:14.514883 | orchestrator | ok: [testbed-manager] 
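The "Create systemd overlay directory" task above prepares a drop-in directory such as `/etc/systemd/system/docker.service.d/`, into which the following task copies an overlay file. The role's actual template is not shown in the log, so the drop-in content below is an illustrative assumption, written to a scratch directory so it runs without root.

```shell
# Illustrative systemd drop-in for dockerd; the LimitNOFILE override is a
# common example, not the role's actual template (which the log omits).
scratch=$(mktemp -d)
dropin="$scratch/overlay.conf"  # normally /etc/systemd/system/docker.service.d/overlay.conf
cat > "$dropin" <<'EOF'
[Service]
# Example override: raise the open-files limit for the docker daemon
LimitNOFILE=1048576
EOF
# After installing a real drop-in, unit files must be re-read, which is
# what the later "Reload systemd daemon" task does:
#   systemctl daemon-reload
cat "$dropin"
```

This matches the log's pattern: the reload handler only fires on hosts where the overlay file actually changed, so the manager node (already configured) is skipped.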
2025-05-29 00:36:14.515628 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:36:14.516825 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:36:14.518431 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:36:14.519053 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:36:14.519894 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:36:14.520520 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:36:14.521161 | orchestrator | 2025-05-29 00:36:14.521788 | orchestrator | TASK [osism.services.docker : Copy systemd overlay file] *********************** 2025-05-29 00:36:14.522080 | orchestrator | Thursday 29 May 2025 00:36:14 +0000 (0:00:00.814) 0:06:38.159 ********** 2025-05-29 00:36:15.916642 | orchestrator | ok: [testbed-manager] 2025-05-29 00:36:15.917182 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:36:15.918735 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:36:15.919512 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:36:15.920775 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:36:15.921565 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:36:15.922498 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:36:15.923695 | orchestrator | 2025-05-29 00:36:15.924333 | orchestrator | TASK [osism.services.docker : Reload systemd daemon if systemd overlay file is changed] *** 2025-05-29 00:36:15.924839 | orchestrator | Thursday 29 May 2025 00:36:15 +0000 (0:00:01.401) 0:06:39.561 ********** 2025-05-29 00:36:16.031118 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:36:17.210427 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:36:17.210646 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:36:17.210668 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:36:17.211782 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:36:17.212468 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:36:17.213270 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:36:17.214202 | orchestrator | 
2025-05-29 00:36:17.214575 | orchestrator | TASK [osism.services.docker : Copy limits configuration file] ******************
2025-05-29 00:36:17.214859 | orchestrator | Thursday 29 May 2025 00:36:17 +0000 (0:00:01.289) 0:06:40.850 **********
2025-05-29 00:36:18.577771 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:18.579480 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:36:18.580817 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:36:18.582764 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:36:18.583328 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:36:18.584119 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:36:18.585056 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:36:18.586118 | orchestrator |
2025-05-29 00:36:18.587149 | orchestrator | TASK [osism.services.docker : Copy daemon.json configuration file] *************
2025-05-29 00:36:18.587200 | orchestrator | Thursday 29 May 2025 00:36:18 +0000 (0:00:01.370) 0:06:42.221 **********
2025-05-29 00:36:20.061909 | orchestrator | changed: [testbed-manager]
2025-05-29 00:36:20.062414 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:36:20.063096 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:36:20.063559 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:36:20.064313 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:36:20.065186 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:36:20.065538 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:36:20.067065 | orchestrator |
2025-05-29 00:36:20.067326 | orchestrator | TASK [osism.services.docker : Include service tasks] ***************************
2025-05-29 00:36:20.067694 | orchestrator | Thursday 29 May 2025 00:36:20 +0000 (0:00:01.477) 0:06:43.699 **********
2025-05-29 00:36:21.106985 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/service.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:36:21.107767 | orchestrator |
2025-05-29 00:36:21.108485 | orchestrator | TASK [osism.services.docker : Reload systemd daemon] ***************************
2025-05-29 00:36:21.109214 | orchestrator | Thursday 29 May 2025 00:36:21 +0000 (0:00:01.051) 0:06:44.750 **********
2025-05-29 00:36:22.435959 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:22.436561 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:36:22.437130 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:36:22.438792 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:36:22.439273 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:36:22.439973 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:22.440629 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:22.441104 | orchestrator |
2025-05-29 00:36:22.442310 | orchestrator | TASK [osism.services.docker : Manage service] **********************************
2025-05-29 00:36:22.442506 | orchestrator | Thursday 29 May 2025 00:36:22 +0000 (0:00:01.328) 0:06:46.078 **********
2025-05-29 00:36:23.543303 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:23.543956 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:36:23.546698 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:36:23.547088 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:36:23.547797 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:36:23.548819 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:23.550235 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:23.550751 | orchestrator |
2025-05-29 00:36:23.551280 | orchestrator | TASK [osism.services.docker : Manage docker socket service] ********************
2025-05-29 00:36:23.552093 | orchestrator | Thursday 29 May 2025 00:36:23 +0000 (0:00:01.106) 0:06:47.185 **********
2025-05-29 00:36:24.664783 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:24.664956 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:36:24.665216 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:36:24.666222 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:36:24.667291 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:36:24.667610 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:24.669826 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:24.670643 | orchestrator |
2025-05-29 00:36:24.671590 | orchestrator | TASK [osism.services.docker : Manage containerd service] ***********************
2025-05-29 00:36:24.671927 | orchestrator | Thursday 29 May 2025 00:36:24 +0000 (0:00:01.123) 0:06:48.308 **********
2025-05-29 00:36:26.056059 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:26.056712 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:36:26.058395 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:36:26.061492 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:36:26.062187 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:36:26.063339 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:26.064527 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:26.065335 | orchestrator |
2025-05-29 00:36:26.066100 | orchestrator | TASK [osism.services.docker : Include bootstrap tasks] *************************
2025-05-29 00:36:26.067725 | orchestrator | Thursday 29 May 2025 00:36:26 +0000 (0:00:01.390) 0:06:49.698 **********
2025-05-29 00:36:27.188402 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/bootstrap.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:36:27.189298 | orchestrator |
2025-05-29 00:36:27.191165 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-29 00:36:27.192587 | orchestrator | Thursday 29 May 2025 00:36:26 +0000 (0:00:00.849) 0:06:50.547 **********
2025-05-29 00:36:27.193171 | orchestrator |
2025-05-29 00:36:27.194743 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-29 00:36:27.195576 | orchestrator | Thursday 29 May 2025 00:36:26 +0000 (0:00:00.044) 0:06:50.592 **********
2025-05-29 00:36:27.196146 | orchestrator |
2025-05-29 00:36:27.197513 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-29 00:36:27.198110 | orchestrator | Thursday 29 May 2025 00:36:26 +0000 (0:00:00.037) 0:06:50.629 **********
2025-05-29 00:36:27.199130 | orchestrator |
2025-05-29 00:36:27.199708 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-29 00:36:27.200578 | orchestrator | Thursday 29 May 2025 00:36:27 +0000 (0:00:00.036) 0:06:50.666 **********
2025-05-29 00:36:27.201208 | orchestrator |
2025-05-29 00:36:27.201960 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-29 00:36:27.202730 | orchestrator | Thursday 29 May 2025 00:36:27 +0000 (0:00:00.044) 0:06:50.710 **********
2025-05-29 00:36:27.203527 | orchestrator |
2025-05-29 00:36:27.203871 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-29 00:36:27.204575 | orchestrator | Thursday 29 May 2025 00:36:27 +0000 (0:00:00.038) 0:06:50.749 **********
2025-05-29 00:36:27.205092 | orchestrator |
2025-05-29 00:36:27.205876 | orchestrator | TASK [osism.services.docker : Flush handlers] **********************************
2025-05-29 00:36:27.206213 | orchestrator | Thursday 29 May 2025 00:36:27 +0000 (0:00:00.037) 0:06:50.786 **********
2025-05-29 00:36:27.206638 | orchestrator |
2025-05-29 00:36:27.207039 | orchestrator | RUNNING HANDLER [osism.commons.repository : Force update of package cache] *****
2025-05-29 00:36:27.207544 | orchestrator | Thursday 29 May 2025 00:36:27 +0000 (0:00:00.045) 0:06:50.832 **********
2025-05-29 00:36:28.273594 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:36:28.273689 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:28.273842 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:28.274162 | orchestrator |
2025-05-29 00:36:28.275147 | orchestrator | RUNNING HANDLER [osism.services.rsyslog : Restart rsyslog service] *************
2025-05-29 00:36:28.275183 | orchestrator | Thursday 29 May 2025 00:36:28 +0000 (0:00:01.085) 0:06:51.917 **********
2025-05-29 00:36:29.812957 | orchestrator | changed: [testbed-manager]
2025-05-29 00:36:29.813179 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:36:29.813650 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:36:29.813863 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:36:29.814322 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:36:29.814677 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:36:29.815167 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:36:29.815595 | orchestrator |
2025-05-29 00:36:29.816091 | orchestrator | RUNNING HANDLER [osism.services.smartd : Restart smartd service] ***************
2025-05-29 00:36:29.816415 | orchestrator | Thursday 29 May 2025 00:36:29 +0000 (0:00:01.536) 0:06:53.454 **********
2025-05-29 00:36:30.907011 | orchestrator | changed: [testbed-manager]
2025-05-29 00:36:30.908145 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:36:30.909200 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:36:30.909905 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:36:30.910193 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:36:30.910681 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:36:30.911263 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:36:30.911947 | orchestrator |
2025-05-29 00:36:30.912447 | orchestrator | RUNNING HANDLER [osism.services.docker : Restart docker service] ***************
2025-05-29 00:36:30.912976 | orchestrator | Thursday 29 May 2025 00:36:30 +0000 (0:00:01.095) 0:06:54.549 **********
2025-05-29 00:36:31.040167 | orchestrator | skipping: [testbed-manager]
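The "Copy daemon.json configuration file" task above writes Docker's /etc/docker/daemon.json on every host before the service is restarted by its handler. The content the osism.services.docker role actually renders is not shown in this log; a minimal illustrative daemon.json (assumed values, not the role's) might look like:

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  },
  "live-restore": true
}
```

Changing most daemon.json keys requires a daemon restart, which matches the "Restart docker service" handler firing on the nodes where the file changed.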
2025-05-29 00:36:32.893441 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:36:32.893584 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:36:32.895106 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:36:32.895137 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:36:32.895203 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:36:32.896679 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:36:32.897292 | orchestrator |
2025-05-29 00:36:32.897622 | orchestrator | RUNNING HANDLER [osism.services.docker : Wait after docker service restart] ****
2025-05-29 00:36:32.898076 | orchestrator | Thursday 29 May 2025 00:36:32 +0000 (0:00:01.985) 0:06:56.535 **********
2025-05-29 00:36:33.002175 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:36:33.002692 | orchestrator |
2025-05-29 00:36:33.006281 | orchestrator | TASK [osism.services.docker : Add user to docker group] ************************
2025-05-29 00:36:33.006427 | orchestrator | Thursday 29 May 2025 00:36:32 +0000 (0:00:00.108) 0:06:56.644 **********
2025-05-29 00:36:34.023845 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:34.023951 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:36:34.024540 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:36:34.024608 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:36:34.024614 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:36:34.024665 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:36:34.024906 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:36:34.025178 | orchestrator |
2025-05-29 00:36:34.025679 | orchestrator | TASK [osism.services.docker : Log into private registry and force re-authorization] ***
2025-05-29 00:36:34.025911 | orchestrator | Thursday 29 May 2025 00:36:34 +0000 (0:00:01.020) 0:06:57.664 **********
2025-05-29 00:36:34.157905 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:36:34.232652 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:36:34.291517 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:36:34.425532 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:36:34.712454 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:36:34.712617 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:36:34.715203 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:36:34.715816 | orchestrator |
2025-05-29 00:36:34.716936 | orchestrator | TASK [osism.services.docker : Include facts tasks] *****************************
2025-05-29 00:36:34.718122 | orchestrator | Thursday 29 May 2025 00:36:34 +0000 (0:00:00.691) 0:06:58.356 **********
2025-05-29 00:36:35.659557 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/docker/tasks/facts.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:36:35.662992 | orchestrator |
2025-05-29 00:36:35.663036 | orchestrator | TASK [osism.services.docker : Create facts directory] **************************
2025-05-29 00:36:35.663050 | orchestrator | Thursday 29 May 2025 00:36:35 +0000 (0:00:00.944) 0:06:59.300 **********
2025-05-29 00:36:36.064152 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:36.476218 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:36:36.477109 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:36:36.479938 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:36:36.479954 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:36:36.480709 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:36.481701 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:36.482820 | orchestrator |
2025-05-29 00:36:36.483683 | orchestrator | TASK [osism.services.docker : Copy docker fact files] **************************
2025-05-29 00:36:36.484598 | orchestrator | Thursday 29 May 2025 00:36:36 +0000 (0:00:00.818) 0:07:00.118 **********
2025-05-29 00:36:39.119205 | orchestrator | ok: [testbed-manager] => (item=docker_containers)
2025-05-29 00:36:39.120453 | orchestrator | changed: [testbed-node-3] => (item=docker_containers)
2025-05-29 00:36:39.120956 | orchestrator | changed: [testbed-node-4] => (item=docker_containers)
2025-05-29 00:36:39.122117 | orchestrator | changed: [testbed-node-5] => (item=docker_containers)
2025-05-29 00:36:39.122972 | orchestrator | changed: [testbed-node-0] => (item=docker_containers)
2025-05-29 00:36:39.123720 | orchestrator | changed: [testbed-node-1] => (item=docker_containers)
2025-05-29 00:36:39.124221 | orchestrator | changed: [testbed-node-2] => (item=docker_containers)
2025-05-29 00:36:39.125153 | orchestrator | ok: [testbed-manager] => (item=docker_images)
2025-05-29 00:36:39.125871 | orchestrator | changed: [testbed-node-4] => (item=docker_images)
2025-05-29 00:36:39.126166 | orchestrator | changed: [testbed-node-3] => (item=docker_images)
2025-05-29 00:36:39.127042 | orchestrator | changed: [testbed-node-5] => (item=docker_images)
2025-05-29 00:36:39.127566 | orchestrator | changed: [testbed-node-0] => (item=docker_images)
2025-05-29 00:36:39.128271 | orchestrator | changed: [testbed-node-1] => (item=docker_images)
2025-05-29 00:36:39.128629 | orchestrator | changed: [testbed-node-2] => (item=docker_images)
2025-05-29 00:36:39.129491 | orchestrator |
2025-05-29 00:36:39.129897 | orchestrator | TASK [osism.commons.docker_compose : This install type is not supported] *******
2025-05-29 00:36:39.130501 | orchestrator | Thursday 29 May 2025 00:36:39 +0000 (0:00:02.641) 0:07:02.760 **********
2025-05-29 00:36:39.238650 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:36:39.307003 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:36:39.370288 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:36:39.432453 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:36:39.499091 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:36:39.583492 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:36:39.585167 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:36:39.586139 | orchestrator |
2025-05-29 00:36:39.587110 | orchestrator | TASK [osism.commons.docker_compose : Include distribution specific install tasks] ***
2025-05-29 00:36:39.588348 | orchestrator | Thursday 29 May 2025 00:36:39 +0000 (0:00:00.466) 0:07:03.226 **********
2025-05-29 00:36:40.389422 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/docker_compose/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:36:40.392419 | orchestrator |
2025-05-29 00:36:40.392476 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose apt preferences file] ***
2025-05-29 00:36:40.392492 | orchestrator | Thursday 29 May 2025 00:36:40 +0000 (0:00:00.804) 0:07:04.031 **********
2025-05-29 00:36:41.243868 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:41.245370 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:36:41.245477 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:36:41.246135 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:36:41.246799 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:36:41.247825 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:41.248277 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:41.248851 | orchestrator |
2025-05-29 00:36:41.249313 | orchestrator | TASK [osism.commons.docker_compose : Get checksum of docker-compose file] ******
2025-05-29 00:36:41.249806 | orchestrator | Thursday 29 May 2025 00:36:41 +0000 (0:00:01.028) 0:07:04.886 **********
2025-05-29 00:36:41.646602 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:41.722196 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:36:42.272274 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:36:42.273135 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:36:42.275975 | orchestrator | ok: [testbed-node-0]
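The "Copy docker fact files" task distributes custom facts (docker_containers, docker_images) into the facts directory created just before it, so later ansible.builtin.setup runs can expose them under ansible_local. The actual fact files are not shown in the log; as a hypothetical illustration of the mechanism, a static fact file in /etc/ansible/facts.d uses INI (or JSON) format:

```ini
; Hypothetical static fact file, e.g. /etc/ansible/facts.d/example.fact
; (illustrates the facts.d mechanism, not the role's real content)
[docker]
configured = true
```

With a file like this, the values become available to playbooks as ansible_local.example.docker.configured after the next fact gathering.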
2025-05-29 00:36:42.276630 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:42.277044 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:42.277521 | orchestrator |
2025-05-29 00:36:42.278329 | orchestrator | TASK [osism.commons.docker_compose : Remove docker-compose binary] *************
2025-05-29 00:36:42.278525 | orchestrator | Thursday 29 May 2025 00:36:42 +0000 (0:00:01.028) 0:07:05.914 **********
2025-05-29 00:36:42.425787 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:36:42.489386 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:36:42.576141 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:36:42.638678 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:36:42.706551 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:36:42.809752 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:36:42.810331 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:36:42.811123 | orchestrator |
2025-05-29 00:36:42.811821 | orchestrator | TASK [osism.commons.docker_compose : Uninstall docker-compose package] *********
2025-05-29 00:36:42.812704 | orchestrator | Thursday 29 May 2025 00:36:42 +0000 (0:00:00.540) 0:07:06.455 **********
2025-05-29 00:36:44.200855 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:44.201467 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:36:44.202004 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:36:44.202781 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:36:44.203930 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:36:44.204566 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:44.204966 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:44.206774 | orchestrator |
2025-05-29 00:36:44.207618 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose script] ***************
2025-05-29 00:36:44.208058 | orchestrator | Thursday 29 May 2025 00:36:44 +0000 (0:00:01.388) 0:07:07.843 **********
2025-05-29 00:36:44.361168 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:36:44.426564 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:36:44.489470 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:36:44.563330 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:36:44.624602 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:36:44.712468 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:36:44.713969 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:36:44.714012 | orchestrator |
2025-05-29 00:36:44.714076 | orchestrator | TASK [osism.commons.docker_compose : Install docker-compose-plugin package] ****
2025-05-29 00:36:44.714090 | orchestrator | Thursday 29 May 2025 00:36:44 +0000 (0:00:00.507) 0:07:08.351 **********
2025-05-29 00:36:46.788160 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:46.789023 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:36:46.791008 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:36:46.792848 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:36:46.793848 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:46.795191 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:36:46.795838 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:46.796781 | orchestrator |
2025-05-29 00:36:46.797483 | orchestrator | TASK [osism.commons.docker_compose : Copy osism.target systemd file] ***********
2025-05-29 00:36:46.797956 | orchestrator | Thursday 29 May 2025 00:36:46 +0000 (0:00:02.079) 0:07:10.430 **********
2025-05-29 00:36:48.108218 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:48.108512 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:36:48.109074 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:36:48.109167 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:36:48.109887 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:36:48.109912 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:36:48.110318 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:36:48.110665 | orchestrator |
2025-05-29 00:36:48.111060 | orchestrator | TASK [osism.commons.docker_compose : Enable osism.target] **********************
2025-05-29 00:36:48.111377 | orchestrator | Thursday 29 May 2025 00:36:48 +0000 (0:00:01.320) 0:07:11.750 **********
2025-05-29 00:36:49.834641 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:49.834991 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:36:49.835373 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:36:49.836141 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:36:49.836299 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:36:49.837603 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:36:49.837636 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:36:49.838076 | orchestrator |
2025-05-29 00:36:49.838527 | orchestrator | TASK [osism.commons.docker_compose : Copy docker-compose systemd unit file] ****
2025-05-29 00:36:49.840048 | orchestrator | Thursday 29 May 2025 00:36:49 +0000 (0:00:01.727) 0:07:13.478 **********
2025-05-29 00:36:51.555912 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:51.557235 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:36:51.558516 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:36:51.560005 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:36:51.560914 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:36:51.561622 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:36:51.562200 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:36:51.563187 | orchestrator |
2025-05-29 00:36:51.563685 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] *********************
2025-05-29 00:36:51.564536 | orchestrator | Thursday 29 May 2025 00:36:51 +0000 (0:00:01.721) 0:07:15.200 **********
2025-05-29 00:36:52.165803 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:52.612784 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:36:52.613888 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:36:52.614790 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:36:52.615145 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:36:52.616142 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:52.617959 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:52.618428 | orchestrator |
2025-05-29 00:36:52.618918 | orchestrator | TASK [osism.commons.facts : Copy fact files] ***********************************
2025-05-29 00:36:52.619531 | orchestrator | Thursday 29 May 2025 00:36:52 +0000 (0:00:01.055) 0:07:16.255 **********
2025-05-29 00:36:52.744640 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:36:52.808780 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:36:52.874409 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:36:52.946084 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:36:53.009327 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:36:53.412884 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:36:53.413140 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:36:53.414775 | orchestrator |
2025-05-29 00:36:53.415392 | orchestrator | TASK [osism.services.chrony : Check minimum and maximum number of servers] *****
2025-05-29 00:36:53.416118 | orchestrator | Thursday 29 May 2025 00:36:53 +0000 (0:00:00.800) 0:07:17.056 **********
2025-05-29 00:36:53.561991 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:36:53.636463 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:36:53.710085 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:36:53.775937 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:36:53.866317 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:36:53.965309 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:36:53.965860 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:36:53.967058 | orchestrator |
2025-05-29 00:36:53.967770 | orchestrator | TASK [osism.services.chrony : Gather variables for each operating system] ******
2025-05-29 00:36:53.968809 | orchestrator | Thursday 29 May 2025 00:36:53 +0000 (0:00:00.552) 0:07:17.608 **********
2025-05-29 00:36:54.101588 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:54.167115 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:36:54.230534 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:36:54.302890 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:36:54.363013 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:36:54.470819 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:54.470980 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:54.471304 | orchestrator |
2025-05-29 00:36:54.471697 | orchestrator | TASK [osism.services.chrony : Set chrony_conf_file variable to default value] ***
2025-05-29 00:36:54.471912 | orchestrator | Thursday 29 May 2025 00:36:54 +0000 (0:00:00.506) 0:07:18.115 **********
2025-05-29 00:36:54.601812 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:54.666164 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:36:54.913008 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:36:54.981481 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:36:55.045200 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:36:55.162078 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:55.162141 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:55.162708 | orchestrator |
2025-05-29 00:36:55.163562 | orchestrator | TASK [osism.services.chrony : Set chrony_key_file variable to default value] ***
2025-05-29 00:36:55.164002 | orchestrator | Thursday 29 May 2025 00:36:55 +0000 (0:00:00.530) 0:07:18.803 **********
2025-05-29 00:36:55.294562 | orchestrator | ok: [testbed-manager]
2025-05-29 00:36:55.370600 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:36:55.431654 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:36:55.496739 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:36:55.579462 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:36:55.692022 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:36:55.692480 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:36:55.693876 | orchestrator |
2025-05-29 00:36:55.694836 | orchestrator | TASK [osism.services.chrony : Populate service facts] **************************
2025-05-29 00:36:55.695442 | orchestrator | Thursday 29 May 2025 00:36:55 +0000 (0:00:00.530) 0:07:19.334 **********
2025-05-29 00:37:01.421550 | orchestrator | ok: [testbed-manager]
2025-05-29 00:37:01.421730 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:37:01.422549 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:37:01.423792 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:37:01.424932 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:37:01.426824 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:37:01.426851 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:37:01.426863 | orchestrator |
2025-05-29 00:37:01.427329 | orchestrator | TASK [osism.services.chrony : Manage timesyncd service] ************************
2025-05-29 00:37:01.427649 | orchestrator | Thursday 29 May 2025 00:37:01 +0000 (0:00:05.730) 0:07:25.064 **********
2025-05-29 00:37:01.557815 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:37:01.624213 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:37:01.691754 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:37:01.754324 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:37:01.815823 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:37:01.936741 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:37:01.938141 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:37:01.939002 | orchestrator |
2025-05-29 00:37:01.940188 | orchestrator | TASK [osism.services.chrony : Include distribution specific install tasks] *****
2025-05-29 00:37:01.940669 | orchestrator | Thursday 29 May 2025 00:37:01 +0000 (0:00:00.517) 0:07:25.582 **********
2025-05-29 00:37:02.917405 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:37:02.918121 | orchestrator |
2025-05-29 00:37:02.918191 | orchestrator | TASK [osism.services.chrony : Install package] *********************************
2025-05-29 00:37:02.918853 | orchestrator | Thursday 29 May 2025 00:37:02 +0000 (0:00:00.977) 0:07:26.559 **********
2025-05-29 00:37:04.654950 | orchestrator | ok: [testbed-manager]
2025-05-29 00:37:04.657853 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:37:04.657887 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:37:04.657899 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:37:04.659983 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:37:04.660918 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:37:04.661305 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:37:04.662581 | orchestrator |
2025-05-29 00:37:04.662752 | orchestrator | TASK [osism.services.chrony : Manage chrony service] ***************************
2025-05-29 00:37:04.663969 | orchestrator | Thursday 29 May 2025 00:37:04 +0000 (0:00:01.738) 0:07:28.298 **********
2025-05-29 00:37:05.780740 | orchestrator | ok: [testbed-manager]
2025-05-29 00:37:05.780929 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:37:05.781149 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:37:05.782368 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:37:05.785045 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:37:05.785122 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:37:05.785135 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:37:05.785147 | orchestrator |
2025-05-29 00:37:05.785293 | orchestrator | TASK [osism.services.chrony : Check if configuration file exists] **************
2025-05-29 00:37:05.785795 | orchestrator | Thursday 29 May 2025 00:37:05 +0000 (0:00:01.122) 0:07:29.421 **********
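The chrony tasks above install the package and manage the service before the role deploys its rendered chrony.conf from the chrony.conf.j2 template. The rendered output never appears in the log; a minimal illustrative chrony.conf (assumed directives, not the role's actual template) could contain:

```
# Illustrative chrony.conf sketch; real servers and options come from the
# osism.services.chrony role variables, which are not visible in this log.
server 0.pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
```

The handler "Restart chrony service" that fires later in the log is the usual follow-up to replacing this file.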
2025-05-29 00:37:06.633531 | orchestrator | ok: [testbed-manager]
2025-05-29 00:37:06.634390 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:37:06.636149 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:37:06.636472 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:37:06.638282 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:37:06.638929 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:37:06.639879 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:37:06.642171 | orchestrator |
2025-05-29 00:37:06.642894 | orchestrator | TASK [osism.services.chrony : Copy configuration file] *************************
2025-05-29 00:37:06.643777 | orchestrator | Thursday 29 May 2025 00:37:06 +0000 (0:00:00.856) 0:07:30.278 **********
2025-05-29 00:37:08.536060 | orchestrator | changed: [testbed-manager] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-29 00:37:08.538160 | orchestrator | changed: [testbed-node-3] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-29 00:37:08.538584 | orchestrator | changed: [testbed-node-4] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-29 00:37:08.540625 | orchestrator | changed: [testbed-node-5] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-29 00:37:08.541244 | orchestrator | changed: [testbed-node-0] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-29 00:37:08.542394 | orchestrator | changed: [testbed-node-1] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-29 00:37:08.543532 | orchestrator | changed: [testbed-node-2] => (item=/usr/share/ansible/collections/ansible_collections/osism/services/roles/chrony/templates/chrony.conf.j2)
2025-05-29 00:37:08.544145 | orchestrator |
2025-05-29 00:37:08.544833 | orchestrator | TASK [osism.services.lldpd : Include distribution specific install tasks] ******
2025-05-29 00:37:08.545965 | orchestrator | Thursday 29 May 2025 00:37:08 +0000 (0:00:01.900) 0:07:32.179 **********
2025-05-29 00:37:09.331713 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/lldpd/tasks/install-Debian-family.yml for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:37:09.332138 | orchestrator |
2025-05-29 00:37:09.333357 | orchestrator | TASK [osism.services.lldpd : Install lldpd package] ****************************
2025-05-29 00:37:09.334175 | orchestrator | Thursday 29 May 2025 00:37:09 +0000 (0:00:00.795) 0:07:32.974 **********
2025-05-29 00:37:18.645734 | orchestrator | changed: [testbed-manager]
2025-05-29 00:37:18.645883 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:37:18.646790 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:37:18.647443 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:37:18.649475 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:37:18.649792 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:37:18.650841 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:37:18.651407 | orchestrator |
2025-05-29 00:37:18.651755 | orchestrator | TASK [osism.services.lldpd : Manage lldpd service] *****************************
2025-05-29 00:37:18.653135 | orchestrator | Thursday 29 May 2025 00:37:18 +0000 (0:00:09.312) 0:07:42.286 **********
2025-05-29 00:37:20.583280 | orchestrator | ok: [testbed-manager]
2025-05-29 00:37:20.584105 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:37:20.588355 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:37:20.588445 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:37:20.588460 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:37:20.588473 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:37:20.588484 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:37:20.588495 | orchestrator |
2025-05-29 00:37:20.588911 | orchestrator | RUNNING HANDLER [osism.commons.docker_compose : Reload systemd daemon] *********
2025-05-29 00:37:20.589306 | orchestrator | Thursday 29 May 2025 00:37:20 +0000 (0:00:01.938) 0:07:44.225 **********
2025-05-29 00:37:21.817485 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:37:21.817565 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:37:21.817571 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:37:21.817575 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:37:21.817579 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:37:21.817583 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:37:21.817587 | orchestrator |
2025-05-29 00:37:21.817619 | orchestrator | RUNNING HANDLER [osism.services.chrony : Restart chrony service] ***************
2025-05-29 00:37:21.817721 | orchestrator | Thursday 29 May 2025 00:37:21 +0000 (0:00:01.232) 0:07:45.457 **********
2025-05-29 00:37:23.138218 | orchestrator | changed: [testbed-manager]
2025-05-29 00:37:23.140496 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:37:23.140531 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:37:23.143242 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:37:23.143697 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:37:23.144584 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:37:23.145592 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:37:23.146555 | orchestrator |
2025-05-29 00:37:23.146743 | orchestrator | PLAY [Apply bootstrap role part 2] *********************************************
2025-05-29 00:37:23.147820 | orchestrator |
2025-05-29 00:37:23.148399 | orchestrator | TASK [Include hardening role] **************************************************
2025-05-29 00:37:23.148990 | orchestrator | Thursday 29 May 2025 00:37:23 +0000 (0:00:01.325) 0:07:46.782 **********
2025-05-29 00:37:23.254874 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:37:23.321215 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:37:23.380490 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:37:23.439271 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:37:23.504866 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:37:23.629422 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:37:23.629511 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:37:23.630525 | orchestrator |
2025-05-29 00:37:23.633912 | orchestrator | PLAY [Apply bootstrap roles part 3] ********************************************
2025-05-29 00:37:23.633955 | orchestrator |
2025-05-29 00:37:23.633970 | orchestrator | TASK [osism.services.journald : Copy configuration file] ***********************
2025-05-29 00:37:23.634458 | orchestrator | Thursday 29 May 2025 00:37:23 +0000 (0:00:00.489) 0:07:47.272 **********
2025-05-29 00:37:24.906423 | orchestrator | changed: [testbed-manager]
2025-05-29 00:37:24.909009 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:37:24.909071 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:37:24.912603 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:37:24.912661 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:37:24.912681 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:37:24.912700 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:37:24.912720 | orchestrator |
2025-05-29 00:37:24.912741 | orchestrator | TASK [osism.services.journald : Manage journald service] ***********************
2025-05-29 00:37:24.912854 | orchestrator | Thursday 29 May 2025 00:37:24 +0000 (0:00:01.276) 0:07:48.549 **********
2025-05-29 00:37:26.366461 | orchestrator | ok: [testbed-manager]
2025-05-29 00:37:26.367349 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:37:26.367388 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:37:26.368397 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:37:26.369263 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:37:26.371962 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:37:26.372059 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:37:26.372074 | orchestrator |
2025-05-29 00:37:26.372087 | orchestrator | TASK [Include auditd role] *****************************************************
2025-05-29 00:37:26.372475 | orchestrator | Thursday 29 May 2025 00:37:26 +0000 (0:00:01.460) 0:07:50.009 **********
2025-05-29 00:37:26.490970 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:37:26.550789 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:37:26.610421 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:37:26.823956 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:37:26.882742 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:37:27.261543 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:37:27.261727 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:37:27.262426 | orchestrator |
2025-05-29 00:37:27.262913 | orchestrator | RUNNING HANDLER [osism.services.journald : Restart journald service] ***********
2025-05-29 00:37:27.265780 | orchestrator | Thursday 29 May 2025 00:37:27 +0000 (0:00:00.894) 0:07:50.904 **********
2025-05-29 00:37:28.475943 | orchestrator | changed: [testbed-manager]
2025-05-29 00:37:28.476228 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:37:28.476740 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:37:28.477089 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:37:28.477875 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:37:28.478294 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:37:28.480264 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:37:28.480861 | orchestrator |
2025-05-29 00:37:28.481113 |
orchestrator | PLAY [Set state bootstrap] ***************************************************** 2025-05-29 00:37:28.481473 | orchestrator | 2025-05-29 00:37:28.481690 | orchestrator | TASK [Set osism.bootstrap.status fact] ***************************************** 2025-05-29 00:37:28.482305 | orchestrator | Thursday 29 May 2025 00:37:28 +0000 (0:00:01.215) 0:07:52.120 ********** 2025-05-29 00:37:29.249525 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:37:29.249650 | orchestrator | 2025-05-29 00:37:29.249782 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-29 00:37:29.250515 | orchestrator | Thursday 29 May 2025 00:37:29 +0000 (0:00:00.773) 0:07:52.893 ********** 2025-05-29 00:37:29.709728 | orchestrator | ok: [testbed-manager] 2025-05-29 00:37:29.779914 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:37:30.307758 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:37:30.309189 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:37:30.310321 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:37:30.311466 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:37:30.312190 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:37:30.312575 | orchestrator | 2025-05-29 00:37:30.313228 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-29 00:37:30.313874 | orchestrator | Thursday 29 May 2025 00:37:30 +0000 (0:00:01.055) 0:07:53.948 ********** 2025-05-29 00:37:31.463500 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:37:31.464179 | orchestrator | changed: [testbed-manager] 2025-05-29 00:37:31.465054 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:37:31.466568 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:37:31.467153 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:37:31.467919 | orchestrator | 
changed: [testbed-node-1] 2025-05-29 00:37:31.468602 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:37:31.469204 | orchestrator | 2025-05-29 00:37:31.469963 | orchestrator | TASK [Set osism.bootstrap.timestamp fact] ************************************** 2025-05-29 00:37:31.470776 | orchestrator | Thursday 29 May 2025 00:37:31 +0000 (0:00:01.158) 0:07:55.106 ********** 2025-05-29 00:37:32.409088 | orchestrator | included: osism.commons.state for testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:37:32.409258 | orchestrator | 2025-05-29 00:37:32.409690 | orchestrator | TASK [osism.commons.state : Create custom facts directory] ********************* 2025-05-29 00:37:32.410365 | orchestrator | Thursday 29 May 2025 00:37:32 +0000 (0:00:00.944) 0:07:56.051 ********** 2025-05-29 00:37:32.804287 | orchestrator | ok: [testbed-manager] 2025-05-29 00:37:33.243654 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:37:33.243820 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:37:33.243939 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:37:33.245254 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:37:33.246226 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:37:33.247172 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:37:33.247777 | orchestrator | 2025-05-29 00:37:33.248576 | orchestrator | TASK [osism.commons.state : Write state into file] ***************************** 2025-05-29 00:37:33.250540 | orchestrator | Thursday 29 May 2025 00:37:33 +0000 (0:00:00.831) 0:07:56.883 ********** 2025-05-29 00:37:33.666397 | orchestrator | changed: [testbed-manager] 2025-05-29 00:37:34.338282 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:37:34.338670 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:37:34.339748 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:37:34.340866 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:37:34.342349 | orchestrator | 
changed: [testbed-node-1]
2025-05-29 00:37:34.343698 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:37:34.344784 | orchestrator |
2025-05-29 00:37:34.346073 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:37:34.347534 | orchestrator | 2025-05-29 00:37:34 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-29 00:37:34.347563 | orchestrator | 2025-05-29 00:37:34 | INFO  | Please wait and do not abort execution.
2025-05-29 00:37:34.347949 | orchestrator | testbed-manager : ok=160  changed=39  unreachable=0 failed=0 skipped=41  rescued=0 ignored=0
2025-05-29 00:37:34.349207 | orchestrator | testbed-node-0 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-29 00:37:34.349856 | orchestrator | testbed-node-1 : ok=168  changed=65  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-29 00:37:34.350758 | orchestrator | testbed-node-2 : ok=168  changed=66  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-29 00:37:34.352019 | orchestrator | testbed-node-3 : ok=167  changed=63  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-29 00:37:34.352051 | orchestrator | testbed-node-4 : ok=167  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-29 00:37:34.352550 | orchestrator | testbed-node-5 : ok=167  changed=63  unreachable=0 failed=0 skipped=36  rescued=0 ignored=0
2025-05-29 00:37:34.353040 | orchestrator |
2025-05-29 00:37:34.353915 | orchestrator | Thursday 29 May 2025 00:37:34 +0000 (0:00:01.097) 0:07:57.980 **********
2025-05-29 00:37:34.353944 | orchestrator | ===============================================================================
2025-05-29 00:37:34.354694 | orchestrator | osism.commons.packages : Install required packages --------------------- 83.76s
2025-05-29 00:37:34.354926 | orchestrator | osism.commons.packages : Download required packages -------------------- 38.05s
2025-05-29 00:37:34.355662 | orchestrator | osism.commons.cleanup : Cleanup installed packages --------------------- 33.12s
2025-05-29 00:37:34.356177 | orchestrator | osism.commons.repository : Update package cache ------------------------ 12.63s
2025-05-29 00:37:34.356663 | orchestrator | osism.services.docker : Install docker package ------------------------- 12.55s
2025-05-29 00:37:34.357231 | orchestrator | osism.services.docker : Install docker-cli package --------------------- 12.27s
2025-05-29 00:37:34.357755 | orchestrator | osism.commons.packages : Remove dependencies that are no longer required -- 12.19s
2025-05-29 00:37:34.358458 | orchestrator | osism.commons.systohc : Install util-linux-extra package --------------- 11.59s
2025-05-29 00:37:34.359079 | orchestrator | osism.services.docker : Install apt-transport-https package ------------ 11.40s
2025-05-29 00:37:34.359550 | orchestrator | osism.services.docker : Install containerd package ---------------------- 9.55s
2025-05-29 00:37:34.359838 | orchestrator | osism.services.lldpd : Install lldpd package ---------------------------- 9.31s
2025-05-29 00:37:34.361021 | orchestrator | osism.services.smartd : Install smartmontools package ------------------- 8.49s
2025-05-29 00:37:34.361170 | orchestrator | osism.commons.cleanup : Remove cloudinit package ------------------------ 8.05s
2025-05-29 00:37:34.361432 | orchestrator | osism.services.docker : Add repository ---------------------------------- 7.67s
2025-05-29 00:37:34.361857 | orchestrator | osism.services.rng : Install rng package -------------------------------- 7.50s
2025-05-29 00:37:34.362336 | orchestrator | osism.commons.cleanup : Uninstall unattended-upgrades package ----------- 7.28s
2025-05-29 00:37:34.362757 | orchestrator | osism.services.docker : Ensure that some packages are not installed ----- 6.15s
2025-05-29 00:37:34.363440 | orchestrator | osism.commons.cleanup : Populate service facts -------------------------- 5.81s
2025-05-29 00:37:34.363982 | orchestrator | osism.commons.services : Populate service facts ------------------------- 5.80s
2025-05-29 00:37:34.364016 | orchestrator | osism.services.chrony : Populate service facts -------------------------- 5.73s
2025-05-29 00:37:35.011961 | orchestrator | + [[ -e /etc/redhat-release ]]
2025-05-29 00:37:35.012046 | orchestrator | + osism apply network
2025-05-29 00:37:36.786200 | orchestrator | 2025-05-29 00:37:36 | INFO  | Task 17158cd0-9e15-4a3f-99fb-1383daa740ca (network) was prepared for execution.
2025-05-29 00:37:36.786293 | orchestrator | 2025-05-29 00:37:36 | INFO  | It takes a moment until task 17158cd0-9e15-4a3f-99fb-1383daa740ca (network) has been started and output is visible here.
2025-05-29 00:37:40.029794 | orchestrator |
2025-05-29 00:37:40.030490 | orchestrator | PLAY [Apply role network] ******************************************************
2025-05-29 00:37:40.030528 | orchestrator |
2025-05-29 00:37:40.032874 | orchestrator | TASK [osism.commons.network : Gather variables for each operating system] ******
2025-05-29 00:37:40.032956 | orchestrator | Thursday 29 May 2025 00:37:40 +0000 (0:00:00.208) 0:00:00.208 **********
2025-05-29 00:37:40.175700 | orchestrator | ok: [testbed-manager]
2025-05-29 00:37:40.250418 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:37:40.335730 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:37:40.411293 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:37:40.489267 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:37:40.719495 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:37:40.720284 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:37:40.720405 | orchestrator |
2025-05-29 00:37:40.721314 | orchestrator | TASK [osism.commons.network : Include type specific tasks] *********************
2025-05-29 00:37:40.721768 | orchestrator | Thursday 29 May 2025 00:37:40 +0000 (0:00:00.690) 0:00:00.899 **********
2025-05-29 00:37:41.891076 |
orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/netplan-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 00:37:41.891595 | orchestrator | 2025-05-29 00:37:41.892726 | orchestrator | TASK [osism.commons.network : Install required packages] *********************** 2025-05-29 00:37:41.893465 | orchestrator | Thursday 29 May 2025 00:37:41 +0000 (0:00:01.169) 0:00:02.068 ********** 2025-05-29 00:37:44.029441 | orchestrator | ok: [testbed-manager] 2025-05-29 00:37:44.030172 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:37:44.030787 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:37:44.031867 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:37:44.034189 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:37:44.036554 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:37:44.036583 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:37:44.037027 | orchestrator | 2025-05-29 00:37:44.038863 | orchestrator | TASK [osism.commons.network : Remove ifupdown package] ************************* 2025-05-29 00:37:44.039830 | orchestrator | Thursday 29 May 2025 00:37:44 +0000 (0:00:02.137) 0:00:04.206 ********** 2025-05-29 00:37:45.831554 | orchestrator | ok: [testbed-manager] 2025-05-29 00:37:45.832217 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:37:45.835651 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:37:45.835688 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:37:45.835700 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:37:45.835740 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:37:45.835802 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:37:45.836891 | orchestrator | 2025-05-29 00:37:45.837534 | orchestrator | TASK [osism.commons.network : Create required directories] ********************* 2025-05-29 00:37:45.838547 | orchestrator | Thursday 29 May 2025 00:37:45 +0000 (0:00:01.802) 
0:00:06.009 ********** 2025-05-29 00:37:46.334923 | orchestrator | ok: [testbed-manager] => (item=/etc/netplan) 2025-05-29 00:37:46.335123 | orchestrator | ok: [testbed-node-0] => (item=/etc/netplan) 2025-05-29 00:37:46.929703 | orchestrator | ok: [testbed-node-1] => (item=/etc/netplan) 2025-05-29 00:37:46.930478 | orchestrator | ok: [testbed-node-2] => (item=/etc/netplan) 2025-05-29 00:37:46.931252 | orchestrator | ok: [testbed-node-3] => (item=/etc/netplan) 2025-05-29 00:37:46.932412 | orchestrator | ok: [testbed-node-4] => (item=/etc/netplan) 2025-05-29 00:37:46.933742 | orchestrator | ok: [testbed-node-5] => (item=/etc/netplan) 2025-05-29 00:37:46.935464 | orchestrator | 2025-05-29 00:37:46.936533 | orchestrator | TASK [osism.commons.network : Prepare netplan configuration template] ********** 2025-05-29 00:37:46.937630 | orchestrator | Thursday 29 May 2025 00:37:46 +0000 (0:00:01.099) 0:00:07.108 ********** 2025-05-29 00:37:48.849311 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-29 00:37:48.849537 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-29 00:37:48.850426 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-29 00:37:48.851398 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-29 00:37:48.854906 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-29 00:37:48.854959 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-29 00:37:48.854970 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-29 00:37:48.854980 | orchestrator | 2025-05-29 00:37:48.855264 | orchestrator | TASK [osism.commons.network : Copy netplan configuration] ********************** 2025-05-29 00:37:48.856613 | orchestrator | Thursday 29 May 2025 00:37:48 +0000 (0:00:01.920) 0:00:09.029 ********** 2025-05-29 00:37:50.557351 | orchestrator | changed: [testbed-manager] 2025-05-29 00:37:50.557950 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:37:50.561557 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:37:50.561597 | 
orchestrator | changed: [testbed-node-2] 2025-05-29 00:37:50.561706 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:37:50.563118 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:37:50.564263 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:37:50.564961 | orchestrator | 2025-05-29 00:37:50.567472 | orchestrator | TASK [osism.commons.network : Remove netplan configuration template] *********** 2025-05-29 00:37:50.567526 | orchestrator | Thursday 29 May 2025 00:37:50 +0000 (0:00:01.705) 0:00:10.734 ********** 2025-05-29 00:37:51.055116 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-29 00:37:51.132649 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-29 00:37:51.562968 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-29 00:37:51.564687 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-29 00:37:51.566460 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-29 00:37:51.566926 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-29 00:37:51.568170 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-29 00:37:51.569235 | orchestrator | 2025-05-29 00:37:51.569987 | orchestrator | TASK [osism.commons.network : Check if path for interface file exists] ********* 2025-05-29 00:37:51.570812 | orchestrator | Thursday 29 May 2025 00:37:51 +0000 (0:00:01.009) 0:00:11.744 ********** 2025-05-29 00:37:52.013899 | orchestrator | ok: [testbed-manager] 2025-05-29 00:37:52.102146 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:37:52.708734 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:37:52.709266 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:37:52.711854 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:37:52.712875 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:37:52.713487 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:37:52.714431 | orchestrator | 2025-05-29 00:37:52.715180 | orchestrator | TASK [osism.commons.network : Copy interfaces file] **************************** 
2025-05-29 00:37:52.715764 | orchestrator | Thursday 29 May 2025 00:37:52 +0000 (0:00:01.141) 0:00:12.886 ********** 2025-05-29 00:37:52.866818 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:37:52.968369 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:37:53.064817 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:37:53.138306 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:37:53.214856 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:37:53.487204 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:37:53.487930 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:37:53.489187 | orchestrator | 2025-05-29 00:37:53.489808 | orchestrator | TASK [osism.commons.network : Install package networkd-dispatcher] ************* 2025-05-29 00:37:53.490921 | orchestrator | Thursday 29 May 2025 00:37:53 +0000 (0:00:00.780) 0:00:13.666 ********** 2025-05-29 00:37:55.501468 | orchestrator | ok: [testbed-manager] 2025-05-29 00:37:55.502978 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:37:55.504211 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:37:55.505652 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:37:55.508116 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:37:55.508794 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:37:55.509886 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:37:55.510562 | orchestrator | 2025-05-29 00:37:55.511569 | orchestrator | TASK [osism.commons.network : Copy dispatcher scripts] ************************* 2025-05-29 00:37:55.512993 | orchestrator | Thursday 29 May 2025 00:37:55 +0000 (0:00:02.015) 0:00:15.682 ********** 2025-05-29 00:37:57.333544 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/iptables.sh', 'src': '/opt/configuration/network/iptables.sh'}) 2025-05-29 00:37:57.335412 | orchestrator | changed: [testbed-node-0] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-29 00:37:57.342406 | 
orchestrator | changed: [testbed-node-1] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-29 00:37:57.344439 | orchestrator | changed: [testbed-node-2] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-29 00:37:57.344463 | orchestrator | changed: [testbed-node-3] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-29 00:37:57.347312 | orchestrator | changed: [testbed-node-4] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-29 00:37:57.347791 | orchestrator | changed: [testbed-manager] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-29 00:37:57.349556 | orchestrator | changed: [testbed-node-5] => (item={'dest': 'routable.d/vxlan.sh', 'src': '/opt/configuration/network/vxlan.sh'}) 2025-05-29 00:37:57.350176 | orchestrator | 2025-05-29 00:37:57.350646 | orchestrator | TASK [osism.commons.network : Manage service networkd-dispatcher] ************** 2025-05-29 00:37:57.351171 | orchestrator | Thursday 29 May 2025 00:37:57 +0000 (0:00:01.827) 0:00:17.510 ********** 2025-05-29 00:37:58.846388 | orchestrator | ok: [testbed-manager] 2025-05-29 00:37:58.846524 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:37:58.847314 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:37:58.851097 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:37:58.852410 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:37:58.853307 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:37:58.854632 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:37:58.854735 | orchestrator | 2025-05-29 00:37:58.855894 | orchestrator | TASK [osism.commons.network : Include cleanup tasks] *************************** 2025-05-29 00:37:58.856228 | orchestrator | Thursday 29 May 2025 00:37:58 +0000 (0:00:01.516) 0:00:19.026 ********** 2025-05-29 
00:38:00.258291 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/commons/roles/network/tasks/cleanup-netplan.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 00:38:00.258403 | orchestrator | 2025-05-29 00:38:00.258755 | orchestrator | TASK [osism.commons.network : List existing configuration files] *************** 2025-05-29 00:38:00.259200 | orchestrator | Thursday 29 May 2025 00:38:00 +0000 (0:00:01.406) 0:00:20.433 ********** 2025-05-29 00:38:00.792872 | orchestrator | ok: [testbed-manager] 2025-05-29 00:38:01.224761 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:38:01.225689 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:38:01.229711 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:38:01.230598 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:38:01.232372 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:38:01.233427 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:38:01.234382 | orchestrator | 2025-05-29 00:38:01.235081 | orchestrator | TASK [osism.commons.network : Set network_configured_files fact] *************** 2025-05-29 00:38:01.235939 | orchestrator | Thursday 29 May 2025 00:38:01 +0000 (0:00:00.971) 0:00:21.405 ********** 2025-05-29 00:38:01.383089 | orchestrator | ok: [testbed-manager] 2025-05-29 00:38:01.465346 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:38:01.693657 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:38:01.776916 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:38:01.874348 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:38:02.013609 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:38:02.013837 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:38:02.015294 | orchestrator | 2025-05-29 00:38:02.015892 | orchestrator | TASK [osism.commons.network : Remove unused configuration files] *************** 2025-05-29 00:38:02.016674 | orchestrator | Thursday 29 May 2025 00:38:02 +0000 
(0:00:00.786) 0:00:22.192 ********** 2025-05-29 00:38:02.436401 | orchestrator | changed: [testbed-manager] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-29 00:38:02.436579 | orchestrator | skipping: [testbed-manager] => (item=/etc/netplan/01-osism.yaml)  2025-05-29 00:38:02.522967 | orchestrator | changed: [testbed-node-0] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-29 00:38:02.523096 | orchestrator | skipping: [testbed-node-0] => (item=/etc/netplan/01-osism.yaml)  2025-05-29 00:38:02.984802 | orchestrator | changed: [testbed-node-1] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-29 00:38:02.985582 | orchestrator | skipping: [testbed-node-1] => (item=/etc/netplan/01-osism.yaml)  2025-05-29 00:38:02.985765 | orchestrator | changed: [testbed-node-2] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-29 00:38:02.987505 | orchestrator | skipping: [testbed-node-2] => (item=/etc/netplan/01-osism.yaml)  2025-05-29 00:38:02.987536 | orchestrator | changed: [testbed-node-3] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-29 00:38:02.987887 | orchestrator | skipping: [testbed-node-3] => (item=/etc/netplan/01-osism.yaml)  2025-05-29 00:38:02.988810 | orchestrator | changed: [testbed-node-4] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-29 00:38:02.989547 | orchestrator | skipping: [testbed-node-4] => (item=/etc/netplan/01-osism.yaml)  2025-05-29 00:38:02.990318 | orchestrator | changed: [testbed-node-5] => (item=/etc/netplan/50-cloud-init.yaml) 2025-05-29 00:38:02.991292 | orchestrator | skipping: [testbed-node-5] => (item=/etc/netplan/01-osism.yaml)  2025-05-29 00:38:02.991800 | orchestrator | 2025-05-29 00:38:02.992175 | orchestrator | TASK [osism.commons.network : Include dummy interfaces] ************************ 2025-05-29 00:38:02.993121 | orchestrator | Thursday 29 May 2025 00:38:02 +0000 (0:00:00.974) 0:00:23.166 ********** 2025-05-29 00:38:03.294392 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:38:03.377654 | orchestrator | skipping: 
[testbed-node-0]
2025-05-29 00:38:03.457953 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:38:03.540819 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:38:03.624265 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:38:04.826213 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:38:04.827613 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:38:04.831286 | orchestrator |
2025-05-29 00:38:04.832186 | orchestrator | RUNNING HANDLER [osism.commons.network : Netplan configuration changed] ********
2025-05-29 00:38:04.833183 | orchestrator | Thursday 29 May 2025 00:38:04 +0000 (0:00:01.837) 0:00:25.004 **********
2025-05-29 00:38:04.989185 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:38:05.070973 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:38:05.321560 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:38:05.420652 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:38:05.510407 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:38:05.542609 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:38:05.542691 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:38:05.543953 | orchestrator |
2025-05-29 00:38:05.544465 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:38:05.545093 | orchestrator | 2025-05-29 00:38:05 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-29 00:38:05.545289 | orchestrator | 2025-05-29 00:38:05 | INFO  | Please wait and do not abort execution.
2025-05-29 00:38:05.547645 | orchestrator | testbed-manager : ok=16  changed=3  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-29 00:38:05.548155 | orchestrator | testbed-node-0 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-29 00:38:05.548709 | orchestrator | testbed-node-1 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-29 00:38:05.549608 | orchestrator | testbed-node-2 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-29 00:38:05.550435 | orchestrator | testbed-node-3 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-29 00:38:05.551415 | orchestrator | testbed-node-4 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-29 00:38:05.552113 | orchestrator | testbed-node-5 : ok=16  changed=4  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0
2025-05-29 00:38:05.552675 | orchestrator |
2025-05-29 00:38:05.553184 | orchestrator | Thursday 29 May 2025 00:38:05 +0000 (0:00:00.720) 0:00:25.724 **********
2025-05-29 00:38:05.554824 | orchestrator | ===============================================================================
2025-05-29 00:38:05.555291 | orchestrator | osism.commons.network : Install required packages ----------------------- 2.14s
2025-05-29 00:38:05.555595 | orchestrator | osism.commons.network : Install package networkd-dispatcher ------------- 2.02s
2025-05-29 00:38:05.555999 | orchestrator | osism.commons.network : Prepare netplan configuration template ---------- 1.92s
2025-05-29 00:38:05.556376 | orchestrator | osism.commons.network : Include dummy interfaces ------------------------ 1.84s
2025-05-29 00:38:05.556730 | orchestrator | osism.commons.network : Copy dispatcher scripts ------------------------- 1.83s
2025-05-29 00:38:05.557088 | orchestrator | osism.commons.network : Remove ifupdown package ------------------------- 1.80s
2025-05-29 00:38:05.557555 | orchestrator | osism.commons.network : Copy netplan configuration ---------------------- 1.71s
2025-05-29 00:38:05.557779 | orchestrator | osism.commons.network : Manage service networkd-dispatcher -------------- 1.52s
2025-05-29 00:38:05.558590 | orchestrator | osism.commons.network : Include cleanup tasks --------------------------- 1.41s
2025-05-29 00:38:05.558933 | orchestrator | osism.commons.network : Include type specific tasks --------------------- 1.17s
2025-05-29 00:38:05.559616 | orchestrator | osism.commons.network : Check if path for interface file exists --------- 1.14s
2025-05-29 00:38:05.560268 | orchestrator | osism.commons.network : Create required directories --------------------- 1.10s
2025-05-29 00:38:05.560648 | orchestrator | osism.commons.network : Remove netplan configuration template ----------- 1.01s
2025-05-29 00:38:05.561152 | orchestrator | osism.commons.network : Remove unused configuration files --------------- 0.97s
2025-05-29 00:38:05.562936 | orchestrator | osism.commons.network : List existing configuration files --------------- 0.97s
2025-05-29 00:38:05.563918 | orchestrator | osism.commons.network : Set network_configured_files fact --------------- 0.79s
2025-05-29 00:38:05.564641 | orchestrator | osism.commons.network : Copy interfaces file ---------------------------- 0.78s
2025-05-29 00:38:05.565125 | orchestrator | osism.commons.network : Netplan configuration changed ------------------- 0.72s
2025-05-29 00:38:05.566065 | orchestrator | osism.commons.network : Gather variables for each operating system ------ 0.69s
2025-05-29 00:38:06.115953 | orchestrator | + osism apply wireguard
2025-05-29 00:38:07.489874 | orchestrator | 2025-05-29 00:38:07 | INFO  | Task ff6c501e-69d6-484b-a5bb-78e6cf3f2865 (wireguard) was prepared for execution.
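[Editor's note, for orientation only: the `osism apply wireguard` play that follows generates a server key pair and a preshared key on testbed-manager. A role of this kind typically renders those secrets into a WireGuard server configuration; the fragment below is a hedged sketch of what such a file generally looks like. The file path, interface name, addresses, port, and key placeholders are invented for illustration and are not taken from this run.]

```ini
; Hypothetical /etc/wireguard/wg0.conf as a role like this might render it.
; All values below are placeholders, not data from this log.
[Interface]
Address = 192.168.0.1/24           ; invented VPN subnet for the server side
ListenPort = 51820                 ; WireGuard's conventional default UDP port
PrivateKey = <server-private-key>  ; output of `wg genkey`

[Peer]
PublicKey = <client-public-key>    ; output of `wg pubkey` on the client side
PresharedKey = <preshared-key>     ; output of `wg genpsk`
AllowedIPs = 192.168.0.2/32        ; the single client address routed via this peer
```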
2025-05-29 00:38:07.489986 | orchestrator | 2025-05-29 00:38:07 | INFO  | It takes a moment until task ff6c501e-69d6-484b-a5bb-78e6cf3f2865 (wireguard) has been started and output is visible here.
2025-05-29 00:38:10.534314 | orchestrator |
2025-05-29 00:38:10.534430 | orchestrator | PLAY [Apply role wireguard] ****************************************************
2025-05-29 00:38:10.536122 | orchestrator |
2025-05-29 00:38:10.536486 | orchestrator | TASK [osism.services.wireguard : Install iptables package] *********************
2025-05-29 00:38:10.538599 | orchestrator | Thursday 29 May 2025 00:38:10 +0000 (0:00:00.162) 0:00:00.162 **********
2025-05-29 00:38:11.964497 | orchestrator | ok: [testbed-manager]
2025-05-29 00:38:11.964711 | orchestrator |
2025-05-29 00:38:11.965169 | orchestrator | TASK [osism.services.wireguard : Install wireguard package] ********************
2025-05-29 00:38:11.966151 | orchestrator | Thursday 29 May 2025 00:38:11 +0000 (0:00:01.433) 0:00:01.595 **********
2025-05-29 00:38:18.239639 | orchestrator | changed: [testbed-manager]
2025-05-29 00:38:18.240414 | orchestrator |
2025-05-29 00:38:18.240776 | orchestrator | TASK [osism.services.wireguard : Create public and private key - server] *******
2025-05-29 00:38:18.241653 | orchestrator | Thursday 29 May 2025 00:38:18 +0000 (0:00:06.274) 0:00:07.870 **********
2025-05-29 00:38:18.756250 | orchestrator | changed: [testbed-manager]
2025-05-29 00:38:18.756880 | orchestrator |
2025-05-29 00:38:18.756910 | orchestrator | TASK [osism.services.wireguard : Create preshared key] *************************
2025-05-29 00:38:18.757140 | orchestrator | Thursday 29 May 2025 00:38:18 +0000 (0:00:00.518) 0:00:08.388 **********
2025-05-29 00:38:19.185168 | orchestrator | changed: [testbed-manager]
2025-05-29 00:38:19.185554 | orchestrator |
2025-05-29 00:38:19.186161 | orchestrator | TASK [osism.services.wireguard : Get preshared key] ****************************
2025-05-29 00:38:19.186419 | orchestrator | Thursday 29 May 2025 00:38:19 +0000 (0:00:00.428) 0:00:08.817 **********
2025-05-29 00:38:19.687639 | orchestrator | ok: [testbed-manager]
2025-05-29 00:38:19.687760 | orchestrator |
2025-05-29 00:38:19.688562 | orchestrator | TASK [osism.services.wireguard : Get public key - server] **********************
2025-05-29 00:38:19.689398 | orchestrator | Thursday 29 May 2025 00:38:19 +0000 (0:00:00.500) 0:00:09.317 **********
2025-05-29 00:38:20.208421 | orchestrator | ok: [testbed-manager]
2025-05-29 00:38:20.208542 | orchestrator |
2025-05-29 00:38:20.208559 | orchestrator | TASK [osism.services.wireguard : Get private key - server] *********************
2025-05-29 00:38:20.210335 | orchestrator | Thursday 29 May 2025 00:38:20 +0000 (0:00:00.521) 0:00:09.839 **********
2025-05-29 00:38:20.648137 | orchestrator | ok: [testbed-manager]
2025-05-29 00:38:20.650409 | orchestrator |
2025-05-29 00:38:20.651030 | orchestrator | TASK [osism.services.wireguard : Copy wg0.conf configuration file] *************
2025-05-29 00:38:20.652040 | orchestrator | Thursday 29 May 2025 00:38:20 +0000 (0:00:00.439) 0:00:10.278 **********
2025-05-29 00:38:21.808686 | orchestrator | changed: [testbed-manager]
2025-05-29 00:38:21.809108 | orchestrator |
2025-05-29 00:38:21.809567 | orchestrator | TASK [osism.services.wireguard : Copy client configuration files] **************
2025-05-29 00:38:21.810379 | orchestrator | Thursday 29 May 2025 00:38:21 +0000 (0:00:01.161) 0:00:11.440 **********
2025-05-29 00:38:22.720465 | orchestrator | changed: [testbed-manager] => (item=None)
2025-05-29 00:38:22.720987 | orchestrator | changed: [testbed-manager]
2025-05-29 00:38:22.721543 | orchestrator |
2025-05-29 00:38:22.722447 | orchestrator | TASK [osism.services.wireguard : Manage wg-quick@wg0.service service] **********
2025-05-29 00:38:22.722836 | orchestrator | Thursday 29 May 2025 00:38:22 +0000 (0:00:00.910) 0:00:12.350 **********
2025-05-29 00:38:24.413361 | orchestrator | changed: [testbed-manager]
2025-05-29 00:38:24.414316 | orchestrator |
2025-05-29 00:38:24.415347 | orchestrator | RUNNING HANDLER [osism.services.wireguard : Restart wg0 service] ***************
2025-05-29 00:38:24.415876 | orchestrator | Thursday 29 May 2025 00:38:24 +0000 (0:00:01.693) 0:00:14.044 **********
2025-05-29 00:38:25.298592 | orchestrator | changed: [testbed-manager]
2025-05-29 00:38:25.299416 | orchestrator |
2025-05-29 00:38:25.300488 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:38:25.300774 | orchestrator | 2025-05-29 00:38:25 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-29 00:38:25.300852 | orchestrator | 2025-05-29 00:38:25 | INFO  | Please wait and do not abort execution.
2025-05-29 00:38:25.302322 | orchestrator | testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:38:25.303062 | orchestrator |
2025-05-29 00:38:25.303818 | orchestrator | Thursday 29 May 2025 00:38:25 +0000 (0:00:00.885) 0:00:14.929 **********
2025-05-29 00:38:25.304806 | orchestrator | ===============================================================================
2025-05-29 00:38:25.305445 | orchestrator | osism.services.wireguard : Install wireguard package -------------------- 6.27s
2025-05-29 00:38:25.306356 | orchestrator | osism.services.wireguard : Manage wg-quick@wg0.service service ---------- 1.69s
2025-05-29 00:38:25.306986 | orchestrator | osism.services.wireguard : Install iptables package --------------------- 1.43s
2025-05-29 00:38:25.308018 | orchestrator | osism.services.wireguard : Copy wg0.conf configuration file ------------- 1.16s
2025-05-29 00:38:25.308663 | orchestrator | osism.services.wireguard : Copy client configuration files -------------- 0.91s
2025-05-29 00:38:25.309312 | orchestrator | osism.services.wireguard : Restart wg0 service -------------------------- 0.89s
2025-05-29 00:38:25.310133 | orchestrator | osism.services.wireguard : Get public key - server ---------------------- 0.52s
2025-05-29 00:38:25.310741 | orchestrator | osism.services.wireguard : Create public and private key - server ------- 0.52s
2025-05-29 00:38:25.311544 | orchestrator | osism.services.wireguard : Get preshared key ---------------------------- 0.50s
2025-05-29 00:38:25.312210 | orchestrator | osism.services.wireguard : Get private key - server --------------------- 0.44s
2025-05-29 00:38:25.312674 | orchestrator | osism.services.wireguard : Create preshared key ------------------------- 0.43s
2025-05-29 00:38:25.784730 | orchestrator | + sh -c /opt/configuration/scripts/prepare-wireguard-configuration.sh
2025-05-29 00:38:25.824064 | orchestrator | % Total % Received % Xferd Average Speed Time Time Time Current
2025-05-29 00:38:25.824150 | orchestrator | Dload Upload Total Spent Left Speed
2025-05-29 00:38:25.896843 | orchestrator | 0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0 100 13 100 13 0 0 178 0 --:--:-- --:--:-- --:--:-- 180
2025-05-29 00:38:25.909357 | orchestrator | + osism apply --environment custom workarounds
2025-05-29 00:38:27.327639 | orchestrator | 2025-05-29 00:38:27 | INFO  | Trying to run play workarounds in environment custom
2025-05-29 00:38:27.373059 | orchestrator | 2025-05-29 00:38:27 | INFO  | Task 8a950f62-c22f-44d8-8d58-367a300b7266 (workarounds) was prepared for execution.
2025-05-29 00:38:27.373161 | orchestrator | 2025-05-29 00:38:27 | INFO  | It takes a moment until task 8a950f62-c22f-44d8-8d58-367a300b7266 (workarounds) has been started and output is visible here.
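The PLAY RECAP counters above are what a job like this is ultimately judged on: a run is healthy only when every host reports failed=0 and unreachable=0. A minimal sketch of that check in shell (the hard-coded recap line and the check itself are illustrative, not part of the testbed scripts):

```shell
# Illustrative check (not part of the testbed tooling): extract the
# failed= and unreachable= counters from an Ansible PLAY RECAP line
# and report success only when both are zero.
recap='testbed-manager : ok=11  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0'
failed=$(printf '%s\n' "$recap" | grep -oE 'failed=[0-9]+' | cut -d= -f2)
unreachable=$(printf '%s\n' "$recap" | grep -oE 'unreachable=[0-9]+' | cut -d= -f2)
if [ "$failed" -eq 0 ] && [ "$unreachable" -eq 0 ]; then
    echo "recap clean"
else
    echo "recap reports problems: failed=$failed unreachable=$unreachable" >&2
fi
```

In a real gate the same extraction would run over every recap line in the saved console log, one per host.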
2025-05-29 00:38:30.390682 | orchestrator |
2025-05-29 00:38:30.395138 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-29 00:38:30.396718 | orchestrator |
2025-05-29 00:38:30.399687 | orchestrator | TASK [Group hosts based on virtualization_role] ********************************
2025-05-29 00:38:30.400915 | orchestrator | Thursday 29 May 2025 00:38:30 +0000 (0:00:00.144) 0:00:00.144 **********
2025-05-29 00:38:30.550894 | orchestrator | changed: [testbed-manager] => (item=virtualization_role_guest)
2025-05-29 00:38:30.644922 | orchestrator | changed: [testbed-node-3] => (item=virtualization_role_guest)
2025-05-29 00:38:30.726125 | orchestrator | changed: [testbed-node-4] => (item=virtualization_role_guest)
2025-05-29 00:38:30.806603 | orchestrator | changed: [testbed-node-5] => (item=virtualization_role_guest)
2025-05-29 00:38:30.887506 | orchestrator | changed: [testbed-node-0] => (item=virtualization_role_guest)
2025-05-29 00:38:31.133997 | orchestrator | changed: [testbed-node-1] => (item=virtualization_role_guest)
2025-05-29 00:38:31.134660 | orchestrator | changed: [testbed-node-2] => (item=virtualization_role_guest)
2025-05-29 00:38:31.136810 | orchestrator |
2025-05-29 00:38:31.136843 | orchestrator | PLAY [Apply netplan configuration on the manager node] *************************
2025-05-29 00:38:31.137015 | orchestrator |
2025-05-29 00:38:31.137617 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-29 00:38:31.138141 | orchestrator | Thursday 29 May 2025 00:38:31 +0000 (0:00:00.746) 0:00:00.890 **********
2025-05-29 00:38:33.838459 | orchestrator | ok: [testbed-manager]
2025-05-29 00:38:33.838569 | orchestrator |
2025-05-29 00:38:33.839331 | orchestrator | PLAY [Apply netplan configuration on all other nodes] **************************
2025-05-29 00:38:33.840581 | orchestrator |
2025-05-29 00:38:33.843534 | orchestrator | TASK [Apply netplan configuration] *********************************************
2025-05-29 00:38:33.846299 | orchestrator | Thursday 29 May 2025 00:38:33 +0000 (0:00:02.698) 0:00:03.589 **********
2025-05-29 00:38:35.605280 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:38:35.606183 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:38:35.606234 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:38:35.607973 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:38:35.607999 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:38:35.608477 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:38:35.609203 | orchestrator |
2025-05-29 00:38:35.610080 | orchestrator | PLAY [Add custom CA certificates to non-manager nodes] *************************
2025-05-29 00:38:35.610502 | orchestrator |
2025-05-29 00:38:35.611290 | orchestrator | TASK [Copy custom CA certificates] *********************************************
2025-05-29 00:38:35.611732 | orchestrator | Thursday 29 May 2025 00:38:35 +0000 (0:00:01.768) 0:00:05.357 **********
2025-05-29 00:38:37.063031 | orchestrator | changed: [testbed-node-4] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-29 00:38:37.066382 | orchestrator | changed: [testbed-node-3] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-29 00:38:37.067846 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-29 00:38:37.069004 | orchestrator | changed: [testbed-node-5] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-29 00:38:37.069558 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-29 00:38:37.070105 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/certificates/ca/testbed.crt)
2025-05-29 00:38:37.070572 | orchestrator |
2025-05-29 00:38:37.071190 | orchestrator | TASK [Run update-ca-certificates] **********************************************
2025-05-29 00:38:37.071752 | orchestrator | Thursday 29 May 2025 00:38:37 +0000 (0:00:01.457) 0:00:06.815 **********
2025-05-29 00:38:40.836858 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:38:40.837014 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:38:40.838961 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:38:40.839513 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:38:40.840589 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:38:40.841081 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:38:40.841464 | orchestrator |
2025-05-29 00:38:40.841960 | orchestrator | TASK [Run update-ca-trust] *****************************************************
2025-05-29 00:38:40.842526 | orchestrator | Thursday 29 May 2025 00:38:40 +0000 (0:00:03.771) 0:00:10.587 **********
2025-05-29 00:38:40.996234 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:38:41.080689 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:38:41.157107 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:38:41.387517 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:38:41.527342 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:38:41.527636 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:38:41.528376 | orchestrator |
2025-05-29 00:38:41.528733 | orchestrator | PLAY [Add a workaround service] ************************************************
2025-05-29 00:38:41.529409 | orchestrator |
2025-05-29 00:38:41.529885 | orchestrator | TASK [Copy workarounds.sh scripts] *********************************************
2025-05-29 00:38:41.535238 | orchestrator | Thursday 29 May 2025 00:38:41 +0000 (0:00:00.695) 0:00:11.282 **********
2025-05-29 00:38:43.112198 | orchestrator | changed: [testbed-manager]
2025-05-29 00:38:43.115747 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:38:43.116995 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:38:43.117081 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:38:43.118274 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:38:43.118973 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:38:43.119619 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:38:43.120349 | orchestrator |
2025-05-29 00:38:43.120808 | orchestrator | TASK [Copy workarounds systemd unit file] **************************************
2025-05-29 00:38:43.121455 | orchestrator | Thursday 29 May 2025 00:38:43 +0000 (0:00:01.585) 0:00:12.868 **********
2025-05-29 00:38:44.698110 | orchestrator | changed: [testbed-manager]
2025-05-29 00:38:44.702774 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:38:44.704146 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:38:44.705381 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:38:44.705937 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:38:44.706731 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:38:44.707333 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:38:44.707629 | orchestrator |
2025-05-29 00:38:44.708510 | orchestrator | TASK [Reload systemd daemon] ***************************************************
2025-05-29 00:38:44.708716 | orchestrator | Thursday 29 May 2025 00:38:44 +0000 (0:00:01.580) 0:00:14.449 **********
2025-05-29 00:38:46.093043 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:38:46.094101 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:38:46.094730 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:38:46.096423 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:38:46.097143 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:38:46.097747 | orchestrator | ok: [testbed-manager]
2025-05-29 00:38:46.098735 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:38:46.099189 | orchestrator |
2025-05-29 00:38:46.099663 | orchestrator | TASK [Enable workarounds.service (Debian)] *************************************
2025-05-29 00:38:46.100256 | orchestrator | Thursday 29 May 2025 00:38:46 +0000 (0:00:01.398) 0:00:15.847 **********
2025-05-29 00:38:47.849622 | orchestrator | changed: [testbed-manager]
2025-05-29 00:38:47.852929 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:38:47.852980 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:38:47.852993 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:38:47.854997 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:38:47.855674 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:38:47.859731 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:38:47.860508 | orchestrator |
2025-05-29 00:38:47.861718 | orchestrator | TASK [Enable and start workarounds.service (RedHat)] ***************************
2025-05-29 00:38:47.863064 | orchestrator | Thursday 29 May 2025 00:38:47 +0000 (0:00:01.756) 0:00:17.604 **********
2025-05-29 00:38:48.006100 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:38:48.091145 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:38:48.168766 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:38:48.236871 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:38:48.479755 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:38:48.668477 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:38:48.669211 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:38:48.669582 | orchestrator |
2025-05-29 00:38:48.670377 | orchestrator | PLAY [On Ubuntu 24.04 install python3-docker from Debian Sid] ******************
2025-05-29 00:38:48.671251 | orchestrator |
2025-05-29 00:38:48.671987 | orchestrator | TASK [Install python3-docker] **************************************************
2025-05-29 00:38:48.672677 | orchestrator | Thursday 29 May 2025 00:38:48 +0000 (0:00:00.819) 0:00:18.423 **********
2025-05-29 00:38:51.071089 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:38:51.071997 | orchestrator | ok: [testbed-manager]
2025-05-29 00:38:51.073162 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:38:51.074440 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:38:51.075283 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:38:51.077489 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:38:51.077975 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:38:51.078758 | orchestrator |
2025-05-29 00:38:51.079477 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:38:51.080051 | orchestrator | 2025-05-29 00:38:51 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-29 00:38:51.080066 | orchestrator | 2025-05-29 00:38:51 | INFO  | Please wait and do not abort execution.
2025-05-29 00:38:51.081025 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:38:51.081434 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:38:51.081933 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:38:51.082285 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:38:51.082719 | orchestrator | testbed-node-3 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:38:51.083283 | orchestrator | testbed-node-4 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:38:51.083519 | orchestrator | testbed-node-5 : ok=9  changed=6  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:38:51.083961 | orchestrator |
2025-05-29 00:38:51.084362 | orchestrator | Thursday 29 May 2025 00:38:51 +0000 (0:00:02.403) 0:00:20.827 **********
2025-05-29 00:38:51.084816 | orchestrator | ===============================================================================
2025-05-29 00:38:51.085411 | orchestrator | Run update-ca-certificates ---------------------------------------------- 3.77s
2025-05-29 00:38:51.085677 | orchestrator | Apply netplan configuration --------------------------------------------- 2.70s
2025-05-29 00:38:51.086150 | orchestrator | Install python3-docker -------------------------------------------------- 2.40s
2025-05-29 00:38:51.086512 | orchestrator | Apply netplan configuration --------------------------------------------- 1.77s
2025-05-29 00:38:51.087238 | orchestrator | Enable workarounds.service (Debian) ------------------------------------- 1.76s
2025-05-29 00:38:51.087381 | orchestrator | Copy workarounds.sh scripts --------------------------------------------- 1.59s
2025-05-29 00:38:51.087848 | orchestrator | Copy workarounds systemd unit file -------------------------------------- 1.58s
2025-05-29 00:38:51.088270 | orchestrator | Copy custom CA certificates --------------------------------------------- 1.46s
2025-05-29 00:38:51.088684 | orchestrator | Reload systemd daemon --------------------------------------------------- 1.40s
2025-05-29 00:38:51.089031 | orchestrator | Enable and start workarounds.service (RedHat) --------------------------- 0.82s
2025-05-29 00:38:51.089397 | orchestrator | Group hosts based on virtualization_role -------------------------------- 0.75s
2025-05-29 00:38:51.089698 | orchestrator | Run update-ca-trust ----------------------------------------------------- 0.70s
2025-05-29 00:38:51.624582 | orchestrator | + osism apply reboot -l testbed-nodes -e ireallymeanit=yes
2025-05-29 00:38:53.030959 | orchestrator | 2025-05-29 00:38:53 | INFO  | Task c51429c5-ff6e-4ce8-99d8-d523acd3452d (reboot) was prepared for execution.
2025-05-29 00:38:53.031060 | orchestrator | 2025-05-29 00:38:53 | INFO  | It takes a moment until task c51429c5-ff6e-4ce8-99d8-d523acd3452d (reboot) has been started and output is visible here.
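The per-task timing blocks above (the "Thursday … **********" stamps and the sorted summary after each PLAY RECAP) come from an Ansible timing callback such as ansible.posix.profile_tasks; that is an assumption based on the output shape, not stated in the log. The sorted "slowest tasks" view can also be recovered from raw "<task> ---- <seconds>s" lines with standard tools; a small sketch over made-up sample lines:

```shell
# Illustrative sketch: sort profile_tasks-style summary lines by the
# trailing seconds value and pick the slowest task. Sample lines are
# hypothetical, not taken verbatim from the job output.
slowest=$(printf '%s\n' \
    'Run update-ca-certificates ---- 3.77s' \
    'Run update-ca-trust ---- 0.70s' \
    'Apply netplan configuration ---- 2.70s' \
  | awk '{s = $NF; sub(/s$/, "", s); printf "%s\t%s\n", s, $0}' \
  | sort -rn \
  | head -n 1 \
  | cut -f2-)
echo "$slowest"
```

The awk step prefixes each line with its numeric duration so `sort -rn` can order on it; `cut -f2-` then strips the sort key again.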
2025-05-29 00:38:56.019343 | orchestrator |
2025-05-29 00:38:56.021322 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-29 00:38:56.022093 | orchestrator |
2025-05-29 00:38:56.023164 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-29 00:38:56.024020 | orchestrator | Thursday 29 May 2025 00:38:56 +0000 (0:00:00.140) 0:00:00.140 **********
2025-05-29 00:38:56.110721 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:38:56.110890 | orchestrator |
2025-05-29 00:38:56.111934 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-29 00:38:56.111967 | orchestrator | Thursday 29 May 2025 00:38:56 +0000 (0:00:00.093) 0:00:00.233 **********
2025-05-29 00:38:56.993710 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:38:56.994406 | orchestrator |
2025-05-29 00:38:56.994843 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-29 00:38:56.996186 | orchestrator | Thursday 29 May 2025 00:38:56 +0000 (0:00:00.884) 0:00:01.118 **********
2025-05-29 00:38:57.094358 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:38:57.094499 | orchestrator |
2025-05-29 00:38:57.096151 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-29 00:38:57.097163 | orchestrator |
2025-05-29 00:38:57.097219 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-29 00:38:57.097553 | orchestrator | Thursday 29 May 2025 00:38:57 +0000 (0:00:00.097) 0:00:01.215 **********
2025-05-29 00:38:57.184365 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:38:57.184621 | orchestrator |
2025-05-29 00:38:57.185725 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-29 00:38:57.186510 | orchestrator | Thursday 29 May 2025 00:38:57 +0000 (0:00:00.093) 0:00:01.308 **********
2025-05-29 00:38:57.815995 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:38:57.816642 | orchestrator |
2025-05-29 00:38:57.817402 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-29 00:38:57.819090 | orchestrator | Thursday 29 May 2025 00:38:57 +0000 (0:00:00.632) 0:00:01.940 **********
2025-05-29 00:38:57.922702 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:38:57.924099 | orchestrator |
2025-05-29 00:38:57.925948 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-29 00:38:57.925976 | orchestrator |
2025-05-29 00:38:57.925988 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-29 00:38:57.926599 | orchestrator | Thursday 29 May 2025 00:38:57 +0000 (0:00:00.104) 0:00:02.045 **********
2025-05-29 00:38:58.024046 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:38:58.024465 | orchestrator |
2025-05-29 00:38:58.025085 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-29 00:38:58.026195 | orchestrator | Thursday 29 May 2025 00:38:58 +0000 (0:00:00.103) 0:00:02.148 **********
2025-05-29 00:38:58.770832 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:38:58.772839 | orchestrator |
2025-05-29 00:38:58.773674 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-29 00:38:58.773951 | orchestrator | Thursday 29 May 2025 00:38:58 +0000 (0:00:00.745) 0:00:02.893 **********
2025-05-29 00:38:58.877101 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:38:58.878269 | orchestrator |
2025-05-29 00:38:58.879031 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-29 00:38:58.880108 | orchestrator |
2025-05-29 00:38:58.880334 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-29 00:38:58.881121 | orchestrator | Thursday 29 May 2025 00:38:58 +0000 (0:00:00.104) 0:00:02.998 **********
2025-05-29 00:38:58.971681 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:38:58.971894 | orchestrator |
2025-05-29 00:38:58.973197 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-29 00:38:58.973726 | orchestrator | Thursday 29 May 2025 00:38:58 +0000 (0:00:00.097) 0:00:03.095 **********
2025-05-29 00:38:59.654390 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:38:59.654698 | orchestrator |
2025-05-29 00:38:59.655325 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-29 00:38:59.655959 | orchestrator | Thursday 29 May 2025 00:38:59 +0000 (0:00:00.680) 0:00:03.776 **********
2025-05-29 00:38:59.765923 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:38:59.766662 | orchestrator |
2025-05-29 00:38:59.767222 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-29 00:38:59.767534 | orchestrator |
2025-05-29 00:38:59.767997 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-29 00:38:59.768529 | orchestrator | Thursday 29 May 2025 00:38:59 +0000 (0:00:00.110) 0:00:03.887 **********
2025-05-29 00:38:59.860875 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:38:59.861008 | orchestrator |
2025-05-29 00:38:59.861582 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-29 00:38:59.862241 | orchestrator | Thursday 29 May 2025 00:38:59 +0000 (0:00:00.098) 0:00:03.985 **********
2025-05-29 00:39:00.513312 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:39:00.514011 | orchestrator |
2025-05-29 00:39:00.514240 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-29 00:39:00.514582 | orchestrator | Thursday 29 May 2025 00:39:00 +0000 (0:00:00.651) 0:00:04.637 **********
2025-05-29 00:39:00.614399 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:39:00.614599 | orchestrator |
2025-05-29 00:39:00.615665 | orchestrator | PLAY [Reboot systems] **********************************************************
2025-05-29 00:39:00.616647 | orchestrator |
2025-05-29 00:39:00.618127 | orchestrator | TASK [Exit playbook, if user did not mean to reboot systems] *******************
2025-05-29 00:39:00.618158 | orchestrator | Thursday 29 May 2025 00:39:00 +0000 (0:00:00.100) 0:00:04.737 **********
2025-05-29 00:39:00.706724 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:39:00.707125 | orchestrator |
2025-05-29 00:39:00.708359 | orchestrator | TASK [Reboot system - do not wait for the reboot to complete] ******************
2025-05-29 00:39:00.711285 | orchestrator | Thursday 29 May 2025 00:39:00 +0000 (0:00:00.094) 0:00:04.831 **********
2025-05-29 00:39:01.388965 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:39:01.389161 | orchestrator |
2025-05-29 00:39:01.389560 | orchestrator | TASK [Reboot system - wait for the reboot to complete] *************************
2025-05-29 00:39:01.389757 | orchestrator | Thursday 29 May 2025 00:39:01 +0000 (0:00:00.681) 0:00:05.512 **********
2025-05-29 00:39:01.419922 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:39:01.420058 | orchestrator |
2025-05-29 00:39:01.420722 | orchestrator | 2025-05-29 00:39:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-29 00:39:01.420749 | orchestrator | 2025-05-29 00:39:01 | INFO  | Please wait and do not abort execution.
2025-05-29 00:39:01.420805 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:39:01.421143 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:39:01.421919 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:39:01.421941 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:39:01.422212 | orchestrator | testbed-node-3 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:39:01.422463 | orchestrator | testbed-node-4 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:39:01.422553 | orchestrator | testbed-node-5 : ok=1  changed=1  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:39:01.423273 | orchestrator |
2025-05-29 00:39:01.423421 | orchestrator | Thursday 29 May 2025 00:39:01 +0000 (0:00:00.033) 0:00:05.546 **********
2025-05-29 00:39:01.423868 | orchestrator | ===============================================================================
2025-05-29 00:39:01.425021 | orchestrator | Reboot system - do not wait for the reboot to complete ------------------ 4.28s
2025-05-29 00:39:01.425412 | orchestrator | Exit playbook, if user did not mean to reboot systems ------------------- 0.58s
2025-05-29 00:39:01.425798 | orchestrator | Reboot system - wait for the reboot to complete ------------------------- 0.55s
2025-05-29 00:39:01.878749 | orchestrator | + osism apply wait-for-connection -l testbed-nodes -e ireallymeanit=yes
2025-05-29 00:39:03.291873 | orchestrator | 2025-05-29 00:39:03 | INFO  | Task e6f52a97-0542-406c-a98f-e58277ab74eb (wait-for-connection) was prepared for execution.
2025-05-29 00:39:03.291966 | orchestrator | 2025-05-29 00:39:03 | INFO  | It takes a moment until task e6f52a97-0542-406c-a98f-e58277ab74eb (wait-for-connection) has been started and output is visible here.
2025-05-29 00:39:06.393962 | orchestrator |
2025-05-29 00:39:06.394224 | orchestrator | PLAY [Wait until remote systems are reachable] *********************************
2025-05-29 00:39:06.398069 | orchestrator |
2025-05-29 00:39:06.398098 | orchestrator | TASK [Wait until remote system is reachable] ***********************************
2025-05-29 00:39:06.398111 | orchestrator | Thursday 29 May 2025 00:39:06 +0000 (0:00:00.170) 0:00:00.170 **********
2025-05-29 00:39:20.216316 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:39:20.216420 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:39:20.216436 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:39:20.216448 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:39:20.216459 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:39:20.216556 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:39:20.216870 | orchestrator |
2025-05-29 00:39:20.219470 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:39:20.219563 | orchestrator | 2025-05-29 00:39:20 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-29 00:39:20.219580 | orchestrator | 2025-05-29 00:39:20 | INFO  | Please wait and do not abort execution.
2025-05-29 00:39:20.220233 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:39:20.222591 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:39:20.222633 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:39:20.222647 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:39:20.223291 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:39:20.223768 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:39:20.224463 | orchestrator | 2025-05-29 00:39:20.224822 | orchestrator | Thursday 29 May 2025 00:39:20 +0000 (0:00:13.819) 0:00:13.990 ********** 2025-05-29 00:39:20.225300 | orchestrator | =============================================================================== 2025-05-29 00:39:20.225827 | orchestrator | Wait until remote system is reachable ---------------------------------- 13.82s 2025-05-29 00:39:20.773567 | orchestrator | + osism apply hddtemp 2025-05-29 00:39:22.200849 | orchestrator | 2025-05-29 00:39:22 | INFO  | Task 556b9032-4683-4e7a-8159-89e0fc4d3dfd (hddtemp) was prepared for execution. 2025-05-29 00:39:22.200922 | orchestrator | 2025-05-29 00:39:22 | INFO  | It takes a moment until task 556b9032-4683-4e7a-8159-89e0fc4d3dfd (hddtemp) has been started and output is visible here. 
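The reachability wait above (`osism apply wait-for-connection`) can be approximated outside Ansible with a plain ssh probe loop. A minimal sketch, assuming standard ssh key access to the nodes; `PROBE_CMD` is a hypothetical hook (not part of osism) so the loop can be exercised without real hosts:

```shell
#!/usr/bin/env bash
# Sketch of a reachability wait, assuming ssh key access to the nodes.
# PROBE_CMD is a hypothetical override hook; unquoted on purpose so the
# default ssh command word-splits into its options.
PROBE_CMD=${PROBE_CMD:-"ssh -o BatchMode=yes -o ConnectTimeout=5"}

wait_for_node() {
  local host=$1 timeout=${2:-600} waited=0
  until $PROBE_CMD "$host" true 2>/dev/null; do
    if (( waited >= timeout )); then
      return 1                      # give up after the timeout
    fi
    sleep 5
    (( waited += 5 ))
  done
}
```

Usage would mirror the play's host list, e.g. `for n in 0 1 2 3 4 5; do wait_for_node "testbed-node-$n"; done`.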
2025-05-29 00:39:25.242633 | orchestrator |
2025-05-29 00:39:25.245493 | orchestrator | PLAY [Apply role hddtemp] ******************************************************
2025-05-29 00:39:25.245589 | orchestrator |
2025-05-29 00:39:25.245689 | orchestrator | TASK [osism.services.hddtemp : Gather variables for each operating system] *****
2025-05-29 00:39:25.247086 | orchestrator | Thursday 29 May 2025 00:39:25 +0000 (0:00:00.194) 0:00:00.194 **********
2025-05-29 00:39:25.386230 | orchestrator | ok: [testbed-manager]
2025-05-29 00:39:25.460015 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:39:25.534974 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:39:25.609449 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:39:25.683253 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:39:25.894230 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:39:25.894363 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:39:25.894930 | orchestrator |
2025-05-29 00:39:25.895720 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific install tasks] ****
2025-05-29 00:39:25.899475 | orchestrator | Thursday 29 May 2025 00:39:25 +0000 (0:00:00.652) 0:00:00.846 **********
2025-05-29 00:39:27.066397 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:39:27.066584 | orchestrator |
2025-05-29 00:39:27.070594 | orchestrator | TASK [osism.services.hddtemp : Remove hddtemp package] *************************
2025-05-29 00:39:27.070684 | orchestrator | Thursday 29 May 2025 00:39:27 +0000 (0:00:01.170) 0:00:02.017 **********
2025-05-29 00:39:29.076021 | orchestrator | ok: [testbed-manager]
2025-05-29 00:39:29.077108 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:39:29.078300 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:39:29.079801 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:39:29.080490 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:39:29.082575 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:39:29.083055 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:39:29.084009 | orchestrator |
2025-05-29 00:39:29.084449 | orchestrator | TASK [osism.services.hddtemp : Enable Kernel Module drivetemp] *****************
2025-05-29 00:39:29.085856 | orchestrator | Thursday 29 May 2025 00:39:29 +0000 (0:00:02.010) 0:00:04.027 **********
2025-05-29 00:39:29.673568 | orchestrator | changed: [testbed-manager]
2025-05-29 00:39:29.754695 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:39:30.220352 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:39:30.222383 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:39:30.226862 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:39:30.226923 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:39:30.226942 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:39:30.226961 | orchestrator |
2025-05-29 00:39:30.227993 | orchestrator | TASK [osism.services.hddtemp : Check if drivetemp module is available] *********
2025-05-29 00:39:30.229896 | orchestrator | Thursday 29 May 2025 00:39:30 +0000 (0:00:01.143) 0:00:05.170 **********
2025-05-29 00:39:31.514654 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:39:31.514922 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:39:31.515635 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:39:31.518950 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:39:31.519881 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:39:31.520666 | orchestrator | ok: [testbed-manager]
2025-05-29 00:39:31.521367 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:39:31.521943 | orchestrator |
2025-05-29 00:39:31.523486 | orchestrator | TASK [osism.services.hddtemp : Load Kernel Module drivetemp] *******************
2025-05-29 00:39:31.523544 | orchestrator | Thursday 29 May 2025 00:39:31 +0000 (0:00:01.294) 0:00:06.465 **********
2025-05-29 00:39:31.769868 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:39:31.853508 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:39:31.938259 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:39:32.009313 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:39:32.122419 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:39:32.125840 | orchestrator | changed: [testbed-manager]
2025-05-29 00:39:32.128410 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:39:32.129607 | orchestrator |
2025-05-29 00:39:32.131355 | orchestrator | TASK [osism.services.hddtemp : Install lm-sensors] *****************************
2025-05-29 00:39:32.132439 | orchestrator | Thursday 29 May 2025 00:39:32 +0000 (0:00:00.608) 0:00:07.074 **********
2025-05-29 00:39:45.142464 | orchestrator | changed: [testbed-manager]
2025-05-29 00:39:45.142627 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:39:45.142643 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:39:45.142655 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:39:45.142785 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:39:45.143795 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:39:45.143893 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:39:45.144856 | orchestrator |
2025-05-29 00:39:45.145468 | orchestrator | TASK [osism.services.hddtemp : Include distribution specific service tasks] ****
2025-05-29 00:39:45.145822 | orchestrator | Thursday 29 May 2025 00:39:45 +0000 (0:00:13.012) 0:00:20.087 **********
2025-05-29 00:39:46.313029 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/hddtemp/tasks/service-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:39:46.313699 | orchestrator |
2025-05-29 00:39:46.314640 | orchestrator | TASK [osism.services.hddtemp : Manage lm-sensors service] **********************
2025-05-29 00:39:46.315639 | orchestrator | Thursday 29 May 2025 00:39:46 +0000 (0:00:01.176) 0:00:21.263 **********
2025-05-29 00:39:48.134269 | orchestrator | changed: [testbed-manager]
2025-05-29 00:39:48.135297 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:39:48.135343 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:39:48.138363 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:39:48.138396 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:39:48.138407 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:39:48.138660 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:39:48.141487 | orchestrator |
2025-05-29 00:39:48.141527 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:39:48.141599 | orchestrator | testbed-manager : ok=9  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:39:48.141656 | orchestrator | 2025-05-29 00:39:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-29 00:39:48.141676 | orchestrator | 2025-05-29 00:39:48 | INFO  | Please wait and do not abort execution.
2025-05-29 00:39:48.141773 | orchestrator | testbed-node-0 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:39:48.142503 | orchestrator | testbed-node-1 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:39:48.142535 | orchestrator | testbed-node-2 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:39:48.142972 | orchestrator | testbed-node-3 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:39:48.144646 | orchestrator | testbed-node-4 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:39:48.145311 | orchestrator | testbed-node-5 : ok=8  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:39:48.145531 | orchestrator |
2025-05-29 00:39:48.146104 | orchestrator | Thursday 29 May 2025 00:39:48 +0000 (0:00:01.823) 0:00:23.087 **********
2025-05-29 00:39:48.146771 | orchestrator | ===============================================================================
2025-05-29 00:39:48.146980 | orchestrator | osism.services.hddtemp : Install lm-sensors ---------------------------- 13.01s
2025-05-29 00:39:48.148125 | orchestrator | osism.services.hddtemp : Remove hddtemp package ------------------------- 2.01s
2025-05-29 00:39:48.148594 | orchestrator | osism.services.hddtemp : Manage lm-sensors service ---------------------- 1.82s
2025-05-29 00:39:48.149344 | orchestrator | osism.services.hddtemp : Check if drivetemp module is available --------- 1.29s
2025-05-29 00:39:48.149746 | orchestrator | osism.services.hddtemp : Include distribution specific service tasks ---- 1.18s
2025-05-29 00:39:48.151364 | orchestrator | osism.services.hddtemp : Include distribution specific install tasks ---- 1.17s
2025-05-29 00:39:48.151406 | orchestrator | osism.services.hddtemp : Enable Kernel Module drivetemp ----------------- 1.14s
2025-05-29 00:39:48.151425 | orchestrator | osism.services.hddtemp : Gather variables for each operating system ----- 0.65s
2025-05-29 00:39:48.151685 | orchestrator | osism.services.hddtemp : Load Kernel Module drivetemp ------------------- 0.61s
2025-05-29 00:39:48.741247 | orchestrator | + sudo systemctl restart docker-compose@manager
2025-05-29 00:39:50.255865 | orchestrator | + [[ ceph-ansible == \c\e\p\h\-\a\n\s\i\b\l\e ]]
2025-05-29 00:39:50.256822 | orchestrator | + wait_for_container_healthy 60 ceph-ansible
2025-05-29 00:39:50.256870 | orchestrator | + local max_attempts=60
2025-05-29 00:39:50.256915 | orchestrator | + local name=ceph-ansible
2025-05-29 00:39:50.256929 | orchestrator | + local attempt_num=1
2025-05-29 00:39:50.256955 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' ceph-ansible
2025-05-29 00:39:50.294155 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-29 00:39:50.294244 | orchestrator | + wait_for_container_healthy 60 kolla-ansible
2025-05-29 00:39:50.294258 | orchestrator | + local max_attempts=60
2025-05-29 00:39:50.294270 | orchestrator | + local name=kolla-ansible
2025-05-29 00:39:50.294281 | orchestrator | + local attempt_num=1
2025-05-29 00:39:50.295214 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' kolla-ansible
2025-05-29 00:39:50.321364 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-29 00:39:50.321467 | orchestrator | + wait_for_container_healthy 60 osism-ansible
2025-05-29 00:39:50.321485 | orchestrator | + local max_attempts=60
2025-05-29 00:39:50.321524 | orchestrator | + local name=osism-ansible
2025-05-29 00:39:50.321536 | orchestrator | + local attempt_num=1
2025-05-29 00:39:50.321633 | orchestrator | ++ /usr/bin/docker inspect -f '{{.State.Health.Status}}' osism-ansible
2025-05-29 00:39:50.344626 | orchestrator | + [[ healthy == \h\e\a\l\t\h\y ]]
2025-05-29 00:39:50.344754 | orchestrator | + [[ true == \t\r\u\e ]]
2025-05-29 00:39:50.344777 | orchestrator | + sh -c /opt/configuration/scripts/disable-ara.sh
2025-05-29 00:39:50.516501 | orchestrator | ARA in ceph-ansible already disabled.
2025-05-29 00:39:50.657472 | orchestrator | ARA in kolla-ansible already disabled.
2025-05-29 00:39:50.850267 | orchestrator | ARA in osism-ansible already disabled.
2025-05-29 00:39:51.006493 | orchestrator | ARA in osism-kubernetes already disabled.
2025-05-29 00:39:51.007050 | orchestrator | + osism apply gather-facts
2025-05-29 00:39:52.413589 | orchestrator | 2025-05-29 00:39:52 | INFO  | Task 9e66254f-d310-4e19-ae9b-85f688200a60 (gather-facts) was prepared for execution.
2025-05-29 00:39:52.413834 | orchestrator | 2025-05-29 00:39:52 | INFO  | It takes a moment until task 9e66254f-d310-4e19-ae9b-85f688200a60 (gather-facts) has been started and output is visible here.
2025-05-29 00:39:55.459877 | orchestrator |
2025-05-29 00:39:55.459977 | orchestrator | PLAY [Gather facts for all hosts] **********************************************
2025-05-29 00:39:55.459994 | orchestrator |
2025-05-29 00:39:55.460005 | orchestrator | TASK [Gathers facts about hosts] ***********************************************
2025-05-29 00:39:55.460017 | orchestrator | Thursday 29 May 2025 00:39:55 +0000 (0:00:00.162) 0:00:00.162 **********
2025-05-29 00:40:00.519201 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:40:00.520086 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:40:00.520141 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:40:00.520204 | orchestrator | ok: [testbed-manager]
2025-05-29 00:40:00.520896 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:40:00.521257 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:40:00.525339 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:40:00.525495 | orchestrator |
2025-05-29 00:40:00.525969 | orchestrator | PLAY [Gather facts for all hosts if using --limit] *****************************
2025-05-29 00:40:00.526623 | orchestrator |
2025-05-29 00:40:00.527163 | orchestrator | TASK [Gather facts for all hosts] **********************************************
2025-05-29 00:40:00.527569 | orchestrator | Thursday 29 May 2025 00:40:00 +0000 (0:00:05.064) 0:00:05.227 **********
2025-05-29 00:40:00.671487 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:40:00.755014 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:40:00.827864 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:40:00.900425 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:40:00.994385 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:40:01.027908 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:40:01.028078 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:40:01.028483 | orchestrator |
2025-05-29 00:40:01.029778 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:40:01.029825 | orchestrator | 2025-05-29 00:40:01 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-29 00:40:01.029842 | orchestrator | 2025-05-29 00:40:01 | INFO  | Please wait and do not abort execution.
2025-05-29 00:40:01.030309 | orchestrator | testbed-manager : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:40:01.030597 | orchestrator | testbed-node-0 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:40:01.030910 | orchestrator | testbed-node-1 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:40:01.031233 | orchestrator | testbed-node-2 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:40:01.031626 | orchestrator | testbed-node-3 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:40:01.032570 | orchestrator | testbed-node-4 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:40:01.032933 | orchestrator | testbed-node-5 : ok=1  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 00:40:01.033191 | orchestrator |
2025-05-29 00:40:01.033555 | orchestrator | Thursday 29 May 2025 00:40:01 +0000 (0:00:00.509) 0:00:05.736 **********
2025-05-29 00:40:01.034124 | orchestrator | ===============================================================================
2025-05-29 00:40:01.034257 | orchestrator | Gathers facts about hosts ----------------------------------------------- 5.06s
2025-05-29 00:40:01.034546 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.51s
2025-05-29 00:40:01.572947 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/001-helpers.sh /usr/local/bin/deploy-helper
2025-05-29 00:40:01.590787 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/500-kubernetes.sh /usr/local/bin/deploy-kubernetes
2025-05-29 00:40:01.609317 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/510-clusterapi.sh /usr/local/bin/deploy-kubernetes-clusterapi
2025-05-29 00:40:01.622879 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-ansible.sh /usr/local/bin/deploy-ceph-with-ansible
2025-05-29 00:40:01.640944 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/100-ceph-with-rook.sh /usr/local/bin/deploy-ceph-with-rook
2025-05-29 00:40:01.659827 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/200-infrastructure.sh /usr/local/bin/deploy-infrastructure
2025-05-29 00:40:01.672310 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/300-openstack.sh /usr/local/bin/deploy-openstack
2025-05-29 00:40:01.693973 | orchestrator | + sudo ln -sf /opt/configuration/scripts/deploy/400-monitoring.sh /usr/local/bin/deploy-monitoring
2025-05-29 00:40:01.709546 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/500-kubernetes.sh /usr/local/bin/upgrade-kubernetes
2025-05-29 00:40:01.720920 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/510-clusterapi.sh /usr/local/bin/upgrade-kubernetes-clusterapi
2025-05-29 00:40:01.735647 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-ansible.sh /usr/local/bin/upgrade-ceph-with-ansible
2025-05-29 00:40:01.757586 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/100-ceph-with-rook.sh /usr/local/bin/upgrade-ceph-with-rook
2025-05-29 00:40:01.776727 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/200-infrastructure.sh /usr/local/bin/upgrade-infrastructure
2025-05-29 00:40:01.794432 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/300-openstack.sh /usr/local/bin/upgrade-openstack
2025-05-29 00:40:01.813107 | orchestrator | + sudo ln -sf /opt/configuration/scripts/upgrade/400-monitoring.sh /usr/local/bin/upgrade-monitoring
2025-05-29 00:40:01.831989 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/300-openstack.sh /usr/local/bin/bootstrap-openstack
2025-05-29 00:40:01.844914 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/301-openstack-octavia-amhpora-image.sh /usr/local/bin/bootstrap-octavia
2025-05-29 00:40:01.860575 | orchestrator | + sudo ln -sf /opt/configuration/scripts/bootstrap/302-openstack-k8s-clusterapi-images.sh /usr/local/bin/bootstrap-clusterapi
2025-05-29 00:40:01.874209 | orchestrator | + sudo ln -sf /opt/configuration/scripts/disable-local-registry.sh /usr/local/bin/disable-local-registry
2025-05-29 00:40:01.891758 | orchestrator | + sudo ln -sf /opt/configuration/scripts/pull-images.sh /usr/local/bin/pull-images
2025-05-29 00:40:01.905132 | orchestrator | + [[ false == \t\r\u\e ]]
2025-05-29 00:40:02.264322 | orchestrator | ok: Runtime: 0:25:19.567067
2025-05-29 00:40:02.383022 |
2025-05-29 00:40:02.383170 | TASK [Deploy services]
2025-05-29 00:40:02.915016 | orchestrator | skipping: Conditional result was False
2025-05-29 00:40:02.931419 |
2025-05-29 00:40:02.931606 | TASK [Deploy in a nutshell]
2025-05-29 00:40:03.644952 | orchestrator | + set -e
2025-05-29 00:40:03.645153 | orchestrator | + source /opt/configuration/scripts/include.sh
2025-05-29 00:40:03.645180 | orchestrator | ++ export INTERACTIVE=false
2025-05-29 00:40:03.645202 | orchestrator | ++ INTERACTIVE=false
2025-05-29 00:40:03.645215 | orchestrator | ++ export OSISM_APPLY_RETRY=1
2025-05-29 00:40:03.645228 | orchestrator | ++ OSISM_APPLY_RETRY=1
2025-05-29 00:40:03.645242 | orchestrator | + source /opt/manager-vars.sh
2025-05-29 00:40:03.645288 | orchestrator | ++ export NUMBER_OF_NODES=6
2025-05-29 00:40:03.645317 | orchestrator | ++ NUMBER_OF_NODES=6
2025-05-29 00:40:03.645332 | orchestrator | ++ export CEPH_VERSION=reef
2025-05-29 00:40:03.645347 | orchestrator | ++ CEPH_VERSION=reef
2025-05-29 00:40:03.645360 | orchestrator | ++ export CONFIGURATION_VERSION=main
2025-05-29 00:40:03.645379 | orchestrator | ++ CONFIGURATION_VERSION=main
2025-05-29 00:40:03.645390 | orchestrator | ++ export MANAGER_VERSION=8.1.0
2025-05-29 00:40:03.645411 | orchestrator | ++ MANAGER_VERSION=8.1.0
2025-05-29 00:40:03.645423 | orchestrator | ++ export OPENSTACK_VERSION=2024.2
2025-05-29 00:40:03.645437 | orchestrator | ++ OPENSTACK_VERSION=2024.2
2025-05-29 00:40:03.645448 | orchestrator | ++ export ARA=false
2025-05-29 00:40:03.645460 | orchestrator | ++ ARA=false
2025-05-29 00:40:03.645471 | orchestrator | ++ export TEMPEST=false
2025-05-29 00:40:03.645487 | orchestrator | ++ TEMPEST=false
2025-05-29 00:40:03.645499 | orchestrator | ++ export IS_ZUUL=true
2025-05-29 00:40:03.645509 | orchestrator | ++ IS_ZUUL=true
2025-05-29 00:40:03.645521 | orchestrator | ++ export MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2025-05-29 00:40:03.645533 | orchestrator | ++ MANAGER_PUBLIC_IP_ADDRESS=81.163.193.2
2025-05-29 00:40:03.645544 | orchestrator | ++ export EXTERNAL_API=false
2025-05-29 00:40:03.645569 | orchestrator | ++ EXTERNAL_API=false
2025-05-29 00:40:03.645580 | orchestrator | ++ export IMAGE_USER=ubuntu
2025-05-29 00:40:03.645591 | orchestrator | ++ IMAGE_USER=ubuntu
2025-05-29 00:40:03.645602 | orchestrator | ++ export IMAGE_NODE_USER=ubuntu
2025-05-29 00:40:03.645613 | orchestrator | ++ IMAGE_NODE_USER=ubuntu
2025-05-29 00:40:03.645624 | orchestrator | ++ export CEPH_STACK=ceph-ansible
2025-05-29 00:40:03.645635 | orchestrator | ++ CEPH_STACK=ceph-ansible
2025-05-29 00:40:03.645647 | orchestrator | + echo
2025-05-29 00:40:03.645658 | orchestrator |
2025-05-29 00:40:03.645670 | orchestrator | # PULL IMAGES
2025-05-29 00:40:03.645725 | orchestrator |
2025-05-29 00:40:03.645736 | orchestrator | + echo '# PULL IMAGES'
2025-05-29 00:40:03.645748 | orchestrator | + echo
2025-05-29 00:40:03.646553 | orchestrator | ++ semver 8.1.0 7.0.0
2025-05-29 00:40:03.705656 | orchestrator | + [[ 1 -ge 0 ]]
2025-05-29 00:40:03.705740 | orchestrator | + osism apply -r 2 -e custom pull-images
2025-05-29 00:40:05.096659 | orchestrator | 2025-05-29 00:40:05 | INFO  | Trying to run play pull-images in environment custom
2025-05-29 00:40:05.146143 | orchestrator | 2025-05-29 00:40:05 | INFO  | Task 4900a16f-286b-4da9-8535-c3528393c87e (pull-images) was prepared for execution.
2025-05-29 00:40:05.146259 | orchestrator | 2025-05-29 00:40:05 | INFO  | It takes a moment until task 4900a16f-286b-4da9-8535-c3528393c87e (pull-images) has been started and output is visible here.
2025-05-29 00:40:08.210788 | orchestrator |
2025-05-29 00:40:08.210900 | orchestrator | PLAY [Pull images] *************************************************************
2025-05-29 00:40:08.210922 | orchestrator |
2025-05-29 00:40:08.210942 | orchestrator | TASK [Pull keystone image] *****************************************************
2025-05-29 00:40:08.210996 | orchestrator | Thursday 29 May 2025 00:40:08 +0000 (0:00:00.138) 0:00:00.138 **********
2025-05-29 00:40:45.764767 | orchestrator | changed: [testbed-manager]
2025-05-29 00:40:45.764911 | orchestrator |
2025-05-29 00:40:45.764928 | orchestrator | TASK [Pull other images] *******************************************************
2025-05-29 00:40:45.764941 | orchestrator | Thursday 29 May 2025 00:40:45 +0000 (0:00:37.551) 0:00:37.689 **********
2025-05-29 00:41:32.906416 | orchestrator | changed: [testbed-manager] => (item=aodh)
2025-05-29 00:41:32.906627 | orchestrator | changed: [testbed-manager] => (item=barbican)
2025-05-29 00:41:32.906649 | orchestrator | changed: [testbed-manager] => (item=ceilometer)
2025-05-29 00:41:32.906662 | orchestrator | changed: [testbed-manager] => (item=cinder)
2025-05-29 00:41:32.906673 | orchestrator | changed: [testbed-manager] => (item=common)
2025-05-29 00:41:32.906684 | orchestrator | changed: [testbed-manager] => (item=designate)
2025-05-29 00:41:32.906696 | orchestrator | changed: [testbed-manager] => (item=glance)
2025-05-29 00:41:32.906707 | orchestrator | changed: [testbed-manager] => (item=grafana)
2025-05-29 00:41:32.906753 | orchestrator | changed: [testbed-manager] => (item=horizon)
2025-05-29 00:41:32.906765 | orchestrator | changed: [testbed-manager] => (item=ironic)
2025-05-29 00:41:32.906780 | orchestrator | changed: [testbed-manager] => (item=loadbalancer)
2025-05-29 00:41:32.906791 | orchestrator | changed: [testbed-manager] => (item=magnum)
2025-05-29 00:41:32.906802 | orchestrator | changed: [testbed-manager] => (item=mariadb)
2025-05-29 00:41:32.906813 | orchestrator | changed: [testbed-manager] => (item=memcached)
2025-05-29 00:41:32.906824 | orchestrator | changed: [testbed-manager] => (item=neutron)
2025-05-29 00:41:32.906835 | orchestrator | changed: [testbed-manager] => (item=nova)
2025-05-29 00:41:32.912122 | orchestrator | changed: [testbed-manager] => (item=octavia)
2025-05-29 00:41:32.912165 | orchestrator | changed: [testbed-manager] => (item=opensearch)
2025-05-29 00:41:32.912178 | orchestrator | changed: [testbed-manager] => (item=openvswitch)
2025-05-29 00:41:32.912189 | orchestrator | changed: [testbed-manager] => (item=ovn)
2025-05-29 00:41:32.912817 | orchestrator | changed: [testbed-manager] => (item=placement)
2025-05-29 00:41:32.913817 | orchestrator | changed: [testbed-manager] => (item=rabbitmq)
2025-05-29 00:41:32.914863 | orchestrator | changed: [testbed-manager] => (item=redis)
2025-05-29 00:41:32.919224 | orchestrator | changed: [testbed-manager] => (item=skyline)
2025-05-29 00:41:32.920271 | orchestrator |
2025-05-29 00:41:32.921053 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:41:32.923361 | orchestrator | 2025-05-29 00:41:32 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-29 00:41:32.923386 | orchestrator | 2025-05-29 00:41:32 | INFO  | Please wait and do not abort execution.
2025-05-29 00:41:32.924234 | orchestrator | testbed-manager : ok=2  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:41:32.924569 | orchestrator |
2025-05-29 00:41:32.925140 | orchestrator | Thursday 29 May 2025 00:41:32 +0000 (0:00:47.145) 0:01:24.835 **********
2025-05-29 00:41:32.925337 | orchestrator | ===============================================================================
2025-05-29 00:41:32.925833 | orchestrator | Pull other images ------------------------------------------------------ 47.15s
2025-05-29 00:41:32.926199 | orchestrator | Pull keystone image ---------------------------------------------------- 37.55s
2025-05-29 00:41:34.749027 | orchestrator | 2025-05-29 00:41:34 | INFO  | Trying to run play wipe-partitions in environment custom
2025-05-29 00:41:34.790787 | orchestrator | 2025-05-29 00:41:34 | INFO  | Task 6e686504-6f75-4548-b199-34ba83dc05ab (wipe-partitions) was prepared for execution.
2025-05-29 00:41:34.790883 | orchestrator | 2025-05-29 00:41:34 | INFO  | It takes a moment until task 6e686504-6f75-4548-b199-34ba83dc05ab (wipe-partitions) has been started and output is visible here.
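The version gate visible earlier in the trace (`semver 8.1.0 7.0.0` printing `1`, then consumed by `[[ 1 -ge 0 ]]`) can be approximated with a small comparator. A sketch assuming plain numeric `X.Y.Z` versions without pre-release tags; the real `semver` helper may handle more:

```shell
# Hypothetical stand-in for the `semver` helper seen in the trace: prints 1,
# 0, or -1 when the first version is greater, equal, or smaller.
# Assumption: purely numeric X.Y.Z components, no pre-release suffixes.
semver_cmp() {
  local IFS=.
  local -a a b
  read -ra a <<< "$1"   # split first version on dots
  read -ra b <<< "$2"   # split second version on dots
  local i
  for i in 0 1 2; do
    if (( ${a[i]:-0} > ${b[i]:-0} )); then echo 1; return; fi
    if (( ${a[i]:-0} < ${b[i]:-0} )); then echo -1; return; fi
  done
  echo 0
}
```

With this, the gate reads as `[[ $(semver_cmp "$MANAGER_VERSION" 7.0.0) -ge 0 ]]`, i.e. "manager version is at least 7.0.0".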
2025-05-29 00:41:37.761285 | orchestrator |
2025-05-29 00:41:37.762268 | orchestrator | PLAY [Wipe partitions] *********************************************************
2025-05-29 00:41:37.762781 | orchestrator |
2025-05-29 00:41:37.763733 | orchestrator | TASK [Find all logical devices owned by UID 167] *******************************
2025-05-29 00:41:37.763997 | orchestrator | Thursday 29 May 2025 00:41:37 +0000 (0:00:00.130) 0:00:00.130 **********
2025-05-29 00:41:38.356994 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:41:38.358291 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:41:38.363002 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:41:38.363597 | orchestrator |
2025-05-29 00:41:38.364387 | orchestrator | TASK [Remove all rook related logical devices] *********************************
2025-05-29 00:41:38.364802 | orchestrator | Thursday 29 May 2025 00:41:38 +0000 (0:00:00.596) 0:00:00.726 **********
2025-05-29 00:41:38.504979 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:41:38.594624 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:41:38.595002 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:41:38.595680 | orchestrator |
2025-05-29 00:41:38.596793 | orchestrator | TASK [Find all logical devices with prefix ceph] *******************************
2025-05-29 00:41:38.596827 | orchestrator | Thursday 29 May 2025 00:41:38 +0000 (0:00:00.238) 0:00:00.965 **********
2025-05-29 00:41:39.314249 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:41:39.314643 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:41:39.317397 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:41:39.317431 | orchestrator |
2025-05-29 00:41:39.317794 | orchestrator | TASK [Remove all ceph related logical devices] *********************************
2025-05-29 00:41:39.321334 | orchestrator | Thursday 29 May 2025 00:41:39 +0000 (0:00:00.712) 0:00:01.677 **********
2025-05-29 00:41:39.480784 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:41:39.590367 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:41:39.590616 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:41:39.591350 | orchestrator |
2025-05-29 00:41:39.594408 | orchestrator | TASK [Check device availability] ***********************************************
2025-05-29 00:41:39.594957 | orchestrator | Thursday 29 May 2025 00:41:39 +0000 (0:00:00.282) 0:00:01.959 **********
2025-05-29 00:41:40.801387 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-29 00:41:40.802607 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-29 00:41:40.803162 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-29 00:41:40.805515 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-29 00:41:40.805780 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-29 00:41:40.807340 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-29 00:41:40.808265 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-29 00:41:40.808781 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-29 00:41:40.811305 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-29 00:41:40.816818 | orchestrator |
2025-05-29 00:41:40.816910 | orchestrator | TASK [Wipe partitions with wipefs] *********************************************
2025-05-29 00:41:40.817917 | orchestrator | Thursday 29 May 2025 00:41:40 +0000 (0:00:01.210) 0:00:03.170 **********
2025-05-29 00:41:42.225251 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdb)
2025-05-29 00:41:42.225625 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdb)
2025-05-29 00:41:42.225889 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdb)
2025-05-29 00:41:42.226204 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdc)
2025-05-29 00:41:42.226997 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdc)
2025-05-29 00:41:42.227019 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdc)
2025-05-29 00:41:42.227298 | orchestrator | ok: [testbed-node-3] => (item=/dev/sdd)
2025-05-29 00:41:42.227567 | orchestrator | ok: [testbed-node-4] => (item=/dev/sdd)
2025-05-29 00:41:42.227884 | orchestrator | ok: [testbed-node-5] => (item=/dev/sdd)
2025-05-29 00:41:42.228182 | orchestrator |
2025-05-29 00:41:42.228564 | orchestrator | TASK [Overwrite first 32M with zeros] ******************************************
2025-05-29 00:41:42.228900 | orchestrator | Thursday 29 May 2025 00:41:42 +0000 (0:00:01.425) 0:00:04.596 **********
2025-05-29 00:41:44.423209 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdb)
2025-05-29 00:41:44.423321 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdb)
2025-05-29 00:41:44.423747 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdb)
2025-05-29 00:41:44.426112 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdc)
2025-05-29 00:41:44.426135 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdc)
2025-05-29 00:41:44.426146 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdc)
2025-05-29 00:41:44.426158 | orchestrator | changed: [testbed-node-3] => (item=/dev/sdd)
2025-05-29 00:41:44.426169 | orchestrator | changed: [testbed-node-4] => (item=/dev/sdd)
2025-05-29 00:41:44.426180 | orchestrator | changed: [testbed-node-5] => (item=/dev/sdd)
2025-05-29 00:41:44.426192 | orchestrator |
2025-05-29 00:41:44.426205 | orchestrator | TASK [Reload udev rules] *******************************************************
2025-05-29 00:41:44.426218 | orchestrator | Thursday 29 May 2025 00:41:44 +0000 (0:00:02.194) 0:00:06.790 **********
2025-05-29 00:41:45.001072 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:41:45.001177 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:41:45.003737 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:41:45.003814 | orchestrator |
2025-05-29 00:41:45.003879 | orchestrator | TASK [Request device events from the kernel] ***********************************
2025-05-29 00:41:45.004256 | orchestrator | Thursday 29 May 2025 00:41:44 +0000 (0:00:00.582) 0:00:07.372 **********
2025-05-29 00:41:45.650845 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:41:45.651852 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:41:45.652953 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:41:45.653083 | orchestrator |
2025-05-29 00:41:45.654402 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:41:45.654644 | orchestrator | 2025-05-29 00:41:45 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-29 00:41:45.654912 | orchestrator | 2025-05-29 00:41:45 | INFO  | Please wait and do not abort execution.
2025-05-29 00:41:45.655422 | orchestrator | testbed-node-3 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:41:45.656231 | orchestrator | testbed-node-4 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:41:45.657941 | orchestrator | testbed-node-5 : ok=7  changed=5  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 00:41:45.658005 | orchestrator |
2025-05-29 00:41:45.658159 | orchestrator | Thursday 29 May 2025 00:41:45 +0000 (0:00:00.648) 0:00:08.021 **********
2025-05-29 00:41:45.658636 | orchestrator | ===============================================================================
2025-05-29 00:41:45.659149 | orchestrator | Overwrite first 32M with zeros ------------------------------------------ 2.19s
2025-05-29 00:41:45.659684 | orchestrator | Wipe partitions with wipefs --------------------------------------------- 1.43s
2025-05-29 00:41:45.660618 | orchestrator | Check device availability ----------------------------------------------- 1.21s
2025-05-29 00:41:45.663644 | orchestrator | Find all logical devices with prefix ceph ------------------------------- 0.71s
2025-05-29
00:41:45.664255 | orchestrator | Request device events from the kernel ----------------------------------- 0.65s 2025-05-29 00:41:45.664404 | orchestrator | Find all logical devices owned by UID 167 ------------------------------- 0.60s 2025-05-29 00:41:45.665071 | orchestrator | Reload udev rules ------------------------------------------------------- 0.58s 2025-05-29 00:41:45.665590 | orchestrator | Remove all ceph related logical devices --------------------------------- 0.28s 2025-05-29 00:41:45.666145 | orchestrator | Remove all rook related logical devices --------------------------------- 0.24s 2025-05-29 00:41:47.763981 | orchestrator | 2025-05-29 00:41:47 | INFO  | Task 499afad0-5358-4341-8064-9cf9289f3451 (facts) was prepared for execution. 2025-05-29 00:41:47.764084 | orchestrator | 2025-05-29 00:41:47 | INFO  | It takes a moment until task 499afad0-5358-4341-8064-9cf9289f3451 (facts) has been started and output is visible here. 2025-05-29 00:41:50.980096 | orchestrator | 2025-05-29 00:41:50.980576 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-29 00:41:50.981204 | orchestrator | 2025-05-29 00:41:50.982636 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-29 00:41:50.983683 | orchestrator | Thursday 29 May 2025 00:41:50 +0000 (0:00:00.217) 0:00:00.217 ********** 2025-05-29 00:41:52.171404 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:41:52.172669 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:41:52.173840 | orchestrator | ok: [testbed-manager] 2025-05-29 00:41:52.174801 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:41:52.175286 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:41:52.178203 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:41:52.178239 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:41:52.178272 | orchestrator | 2025-05-29 00:41:52.178495 | orchestrator | TASK [osism.commons.facts : Copy fact files] 
*********************************** 2025-05-29 00:41:52.179231 | orchestrator | Thursday 29 May 2025 00:41:52 +0000 (0:00:01.191) 0:00:01.408 ********** 2025-05-29 00:41:52.365088 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:41:52.470408 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:41:52.568299 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:41:52.662948 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:41:52.758699 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:41:53.675536 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:41:53.675791 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:41:53.676311 | orchestrator | 2025-05-29 00:41:53.677422 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-29 00:41:53.680968 | orchestrator | 2025-05-29 00:41:53.682419 | orchestrator | TASK [Gathers facts about hosts] *********************************************** 2025-05-29 00:41:53.683858 | orchestrator | Thursday 29 May 2025 00:41:53 +0000 (0:00:01.505) 0:00:02.914 ********** 2025-05-29 00:41:58.514852 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:41:58.521198 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:41:58.521257 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:41:58.521271 | orchestrator | ok: [testbed-manager] 2025-05-29 00:41:58.521667 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:41:58.522832 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:41:58.526127 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:41:58.526149 | orchestrator | 2025-05-29 00:41:58.526163 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-29 00:41:58.526176 | orchestrator | 2025-05-29 00:41:58.526187 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-29 00:41:58.526871 | orchestrator | Thursday 29 May 2025 00:41:58 +0000 (0:00:04.835) 
0:00:07.750 ********** 2025-05-29 00:41:58.986106 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:41:59.079167 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:41:59.176513 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:41:59.275722 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:41:59.387211 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:41:59.432181 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:41:59.432669 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:41:59.433707 | orchestrator | 2025-05-29 00:41:59.434597 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:41:59.435298 | orchestrator | 2025-05-29 00:41:59 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-29 00:41:59.435323 | orchestrator | 2025-05-29 00:41:59 | INFO  | Please wait and do not abort execution. 2025-05-29 00:41:59.436616 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:41:59.437262 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:41:59.438407 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:41:59.439108 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:41:59.440037 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:41:59.440926 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:41:59.441591 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:41:59.442573 | orchestrator | 2025-05-29 00:41:59.443183 | orchestrator | Thursday 29 May 2025 00:41:59 
+0000 (0:00:00.923) 0:00:08.674 ********** 2025-05-29 00:41:59.443212 | orchestrator | =============================================================================== 2025-05-29 00:41:59.443584 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.84s 2025-05-29 00:41:59.444018 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.51s 2025-05-29 00:41:59.444519 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.19s 2025-05-29 00:41:59.446412 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.92s 2025-05-29 00:42:01.693984 | orchestrator | 2025-05-29 00:42:01 | INFO  | Task 285f1578-055a-4f31-9a6a-3f65bf3caf30 (ceph-configure-lvm-volumes) was prepared for execution. 2025-05-29 00:42:01.694142 | orchestrator | 2025-05-29 00:42:01 | INFO  | It takes a moment until task 285f1578-055a-4f31-9a6a-3f65bf3caf30 (ceph-configure-lvm-volumes) has been started and output is visible here. 
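The disk-wipe play earlier in this log boils down to three steps per device: drop filesystem/partition signatures with wipefs, zero the first 32M with dd, then reload and retrigger udev. A minimal sketch, run against a scratch file instead of the real /dev/sdb../dev/sdd targets (real runs need root; the wipefs/udevadm lines are shown as comments since they act on actual devices):

```shell
# Stand-in for one of the play's target disks (the play iterated sdb..sdd).
disk=$(mktemp)
printf 'FAKE-PARTITION-SIGNATURE' > "$disk"   # pretend old metadata is present

# wipefs --all "$disk"                        # step 1: erase known signatures
# Step 2: the play overwrote the first 32M; 1M is enough for the sketch.
dd if=/dev/zero of="$disk" bs=1M count=1 conv=notrunc status=none
# udevadm control --reload-rules              # step 3: reload udev rules...
# udevadm trigger --action=change             # ...and request device events

# Verify the old signature bytes are gone (all zeros now).
zero_check=$(head -c 24 "$disk" | tr -d '\0' | wc -c)
echo "$zero_check"
rm -f "$disk"
```

The order matters: wipefs removes the signatures udev keys on, the dd pass catches anything wipefs does not know about (e.g. LVM/Ceph metadata near the start of the disk), and the udev retrigger makes the kernel's view consistent before the next play enumerates devices.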
2025-05-29 00:42:05.104129 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-29 00:42:05.727365 | orchestrator |
2025-05-29 00:42:05.727527 | orchestrator | PLAY [Ceph configure LVM] ******************************************************
2025-05-29 00:42:05.728159 | orchestrator |
2025-05-29 00:42:05.728260 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-29 00:42:05.728668 | orchestrator | Thursday 29 May 2025 00:42:05 +0000 (0:00:00.528) 0:00:00.528 **********
2025-05-29 00:42:06.011030 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-29 00:42:06.011559 | orchestrator |
2025-05-29 00:42:06.011816 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-29 00:42:06.013494 | orchestrator | Thursday 29 May 2025 00:42:06 +0000 (0:00:00.285) 0:00:00.814 **********
2025-05-29 00:42:06.264542 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:42:06.267974 | orchestrator |
2025-05-29 00:42:06.268215 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:06.268786 | orchestrator | Thursday 29 May 2025 00:42:06 +0000 (0:00:00.252) 0:00:01.066 **********
2025-05-29 00:42:06.811213 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-29 00:42:06.812134 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-29 00:42:06.814535 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-29 00:42:06.814734 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-29 00:42:06.815157 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-29 00:42:06.815228 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-29 00:42:06.816100 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-29 00:42:06.816168 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-29 00:42:06.816509 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-29 00:42:06.818482 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-29 00:42:06.818581 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-29 00:42:06.818909 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-29 00:42:06.820393 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-29 00:42:06.820542 | orchestrator |
2025-05-29 00:42:06.820871 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:06.822347 | orchestrator | Thursday 29 May 2025 00:42:06 +0000 (0:00:00.545) 0:00:01.612 **********
2025-05-29 00:42:07.047568 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:07.048549 | orchestrator |
2025-05-29 00:42:07.049082 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:07.051343 | orchestrator | Thursday 29 May 2025 00:42:07 +0000 (0:00:00.232) 0:00:01.845 **********
2025-05-29 00:42:07.255309 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:07.255449 | orchestrator |
2025-05-29 00:42:07.255526 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:07.257395 | orchestrator | Thursday 29 May 2025 00:42:07 +0000 (0:00:00.212) 0:00:02.057 **********
2025-05-29 00:42:07.487617 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:07.487741 | orchestrator |
2025-05-29 00:42:07.487891 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:07.489789 | orchestrator | Thursday 29 May 2025 00:42:07 +0000 (0:00:00.227) 0:00:02.284 **********
2025-05-29 00:42:07.712601 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:07.713175 | orchestrator |
2025-05-29 00:42:07.713210 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:07.713558 | orchestrator | Thursday 29 May 2025 00:42:07 +0000 (0:00:00.228) 0:00:02.513 **********
2025-05-29 00:42:07.937023 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:07.937132 | orchestrator |
2025-05-29 00:42:07.937157 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:07.937208 | orchestrator | Thursday 29 May 2025 00:42:07 +0000 (0:00:00.223) 0:00:02.737 **********
2025-05-29 00:42:08.145049 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:08.145161 | orchestrator |
2025-05-29 00:42:08.145244 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:08.147524 | orchestrator | Thursday 29 May 2025 00:42:08 +0000 (0:00:00.211) 0:00:02.948 **********
2025-05-29 00:42:08.375152 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:08.375370 | orchestrator |
2025-05-29 00:42:08.375393 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:08.375879 | orchestrator | Thursday 29 May 2025 00:42:08 +0000 (0:00:00.232) 0:00:03.180 **********
2025-05-29 00:42:08.578174 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:08.578273 | orchestrator |
2025-05-29 00:42:08.578288 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:08.578454 | orchestrator | Thursday 29 May 2025 00:42:08 +0000 (0:00:00.201) 0:00:03.382 **********
2025-05-29 00:42:09.271119 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155)
2025-05-29 00:42:09.271226 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155)
2025-05-29 00:42:09.274832 | orchestrator |
2025-05-29 00:42:09.274931 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:09.275458 | orchestrator | Thursday 29 May 2025 00:42:09 +0000 (0:00:00.692) 0:00:04.075 **********
2025-05-29 00:42:10.023620 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_172ad3b6-4b22-4cdf-a28e-ac5da2182fda)
2025-05-29 00:42:10.023720 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_172ad3b6-4b22-4cdf-a28e-ac5da2182fda)
2025-05-29 00:42:10.023735 | orchestrator |
2025-05-29 00:42:10.023747 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:10.023759 | orchestrator | Thursday 29 May 2025 00:42:10 +0000 (0:00:00.749) 0:00:04.825 **********
2025-05-29 00:42:10.440819 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_81c2fe1f-38cc-49f7-ae7d-3d898626253d)
2025-05-29 00:42:10.440923 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_81c2fe1f-38cc-49f7-ae7d-3d898626253d)
2025-05-29 00:42:10.442652 | orchestrator |
2025-05-29 00:42:10.442841 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:10.443150 | orchestrator | Thursday 29 May 2025 00:42:10 +0000 (0:00:00.421) 0:00:05.246 **********
2025-05-29 00:42:10.828853 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_872f8c6a-38b8-4598-af69-d174e2488207)
2025-05-29 00:42:10.829151 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_872f8c6a-38b8-4598-af69-d174e2488207)
2025-05-29 00:42:10.829556 | orchestrator |
2025-05-29 00:42:10.831779 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:10.832135 | orchestrator | Thursday 29 May 2025 00:42:10 +0000 (0:00:00.386) 0:00:05.633 **********
2025-05-29 00:42:11.129792 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-29 00:42:11.131733 | orchestrator |
2025-05-29 00:42:11.132170 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:11.132955 | orchestrator | Thursday 29 May 2025 00:42:11 +0000 (0:00:00.302) 0:00:05.936 **********
2025-05-29 00:42:11.548123 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-29 00:42:11.548318 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-29 00:42:11.548730 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-29 00:42:11.549308 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-29 00:42:11.550291 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-29 00:42:11.550631 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-29 00:42:11.550915 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-29 00:42:11.552844 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-29 00:42:11.552902 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-29 00:42:11.553557 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb)
2025-05-29 00:42:11.553918 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc)
2025-05-29 00:42:11.553939 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd)
2025-05-29 00:42:11.557417 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0)
2025-05-29 00:42:11.557451 | orchestrator |
2025-05-29 00:42:11.557471 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:11.557482 | orchestrator | Thursday 29 May 2025 00:42:11 +0000 (0:00:00.414) 0:00:06.350 **********
2025-05-29 00:42:11.755481 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:11.755947 | orchestrator |
2025-05-29 00:42:11.756547 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:11.759441 | orchestrator | Thursday 29 May 2025 00:42:11 +0000 (0:00:00.208) 0:00:06.558 **********
2025-05-29 00:42:11.952122 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:11.955716 | orchestrator |
2025-05-29 00:42:11.955937 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:11.956590 | orchestrator | Thursday 29 May 2025 00:42:11 +0000 (0:00:00.196) 0:00:06.755 **********
2025-05-29 00:42:12.176945 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:12.177072 | orchestrator |
2025-05-29 00:42:12.177255 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:12.177289 | orchestrator | Thursday 29 May 2025 00:42:12 +0000 (0:00:00.225) 0:00:06.980 **********
2025-05-29 00:42:12.388953 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:12.390187 | orchestrator |
2025-05-29 00:42:12.391242 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:12.392108 | orchestrator | Thursday 29 May 2025 00:42:12 +0000 (0:00:00.213) 0:00:07.194 **********
2025-05-29 00:42:12.561585 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:12.561666 | orchestrator |
2025-05-29 00:42:12.561696 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:12.561706 | orchestrator | Thursday 29 May 2025 00:42:12 +0000 (0:00:00.171) 0:00:07.365 **********
2025-05-29 00:42:12.912363 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:12.912668 | orchestrator |
2025-05-29 00:42:12.913862 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:12.914519 | orchestrator | Thursday 29 May 2025 00:42:12 +0000 (0:00:00.352) 0:00:07.717 **********
2025-05-29 00:42:13.113428 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:13.116369 | orchestrator |
2025-05-29 00:42:13.116444 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:13.116459 | orchestrator | Thursday 29 May 2025 00:42:13 +0000 (0:00:00.198) 0:00:07.916 **********
2025-05-29 00:42:13.304682 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:13.304840 | orchestrator |
2025-05-29 00:42:13.307469 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:13.308345 | orchestrator | Thursday 29 May 2025 00:42:13 +0000 (0:00:00.191) 0:00:08.108 **********
2025-05-29 00:42:13.927574 | orchestrator | ok: [testbed-node-3] => (item=sda1)
2025-05-29 00:42:13.927664 | orchestrator | ok: [testbed-node-3] => (item=sda14)
2025-05-29 00:42:13.927719 | orchestrator | ok: [testbed-node-3] => (item=sda15)
2025-05-29 00:42:13.927816 | orchestrator | ok: [testbed-node-3] => (item=sda16)
2025-05-29 00:42:13.928725 | orchestrator |
2025-05-29 00:42:13.930639 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:13.930838 | orchestrator | Thursday 29 May 2025 00:42:13 +0000 (0:00:00.624) 0:00:08.732 **********
2025-05-29 00:42:14.112400 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:14.113982 | orchestrator |
2025-05-29 00:42:14.118821 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:14.119257 | orchestrator | Thursday 29 May 2025 00:42:14 +0000 (0:00:00.184) 0:00:08.917 **********
2025-05-29 00:42:14.302616 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:14.302975 | orchestrator |
2025-05-29 00:42:14.304133 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:14.304584 | orchestrator | Thursday 29 May 2025 00:42:14 +0000 (0:00:00.191) 0:00:09.108 **********
2025-05-29 00:42:14.513495 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:14.515933 | orchestrator |
2025-05-29 00:42:14.516566 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:14.516595 | orchestrator | Thursday 29 May 2025 00:42:14 +0000 (0:00:00.209) 0:00:09.318 **********
2025-05-29 00:42:14.702245 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:14.702592 | orchestrator |
2025-05-29 00:42:14.703725 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-29 00:42:14.707611 | orchestrator | Thursday 29 May 2025 00:42:14 +0000 (0:00:00.186) 0:00:09.505 **********
2025-05-29 00:42:14.856794 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': None})
2025-05-29 00:42:14.856933 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': None})
2025-05-29 00:42:14.857026 | orchestrator |
2025-05-29 00:42:14.857486 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-29 00:42:14.857795 | orchestrator | Thursday 29 May 2025 00:42:14 +0000 (0:00:00.153) 0:00:09.658 **********
2025-05-29 00:42:14.973706 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:14.973784 | orchestrator |
2025-05-29 00:42:14.974633 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-29 00:42:14.976525 | orchestrator | Thursday 29 May 2025 00:42:14 +0000 (0:00:00.120) 0:00:09.779 **********
2025-05-29 00:42:15.235638 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:15.235725 | orchestrator |
2025-05-29 00:42:15.235972 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-29 00:42:15.236757 | orchestrator | Thursday 29 May 2025 00:42:15 +0000 (0:00:00.260) 0:00:10.040 **********
2025-05-29 00:42:15.349744 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:15.350367 | orchestrator |
2025-05-29 00:42:15.351347 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-29 00:42:15.352661 | orchestrator | Thursday 29 May 2025 00:42:15 +0000 (0:00:00.109) 0:00:10.149 **********
2025-05-29 00:42:15.512690 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:42:15.512824 | orchestrator |
2025-05-29 00:42:15.513440 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-29 00:42:15.513913 | orchestrator | Thursday 29 May 2025 00:42:15 +0000 (0:00:00.163) 0:00:10.313 **********
2025-05-29 00:42:15.678101 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b02a0e5a-ac94-54a1-88a1-38ba26e145f6'}})
2025-05-29 00:42:15.678181 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '81bd5020-0460-5411-80bb-35101e63cce8'}})
2025-05-29 00:42:15.678637 | orchestrator |
2025-05-29 00:42:15.679224 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-29 00:42:15.679677 | orchestrator | Thursday 29 May 2025 00:42:15 +0000 (0:00:00.170) 0:00:10.483 **********
2025-05-29 00:42:15.836449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b02a0e5a-ac94-54a1-88a1-38ba26e145f6'}})
2025-05-29 00:42:15.836541 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '81bd5020-0460-5411-80bb-35101e63cce8'}})
2025-05-29 00:42:15.836892 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:15.837673 | orchestrator |
2025-05-29 00:42:15.838134 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-29 00:42:15.838934 | orchestrator | Thursday 29 May 2025 00:42:15 +0000 (0:00:00.157) 0:00:10.641 **********
2025-05-29 00:42:15.997628 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b02a0e5a-ac94-54a1-88a1-38ba26e145f6'}})
2025-05-29 00:42:16.002460 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '81bd5020-0460-5411-80bb-35101e63cce8'}})
2025-05-29 00:42:16.005775 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:16.006958 | orchestrator |
2025-05-29 00:42:16.007598 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-29 00:42:16.008231 | orchestrator | Thursday 29 May 2025 00:42:15 +0000 (0:00:00.155) 0:00:10.797 **********
2025-05-29 00:42:16.136446 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b02a0e5a-ac94-54a1-88a1-38ba26e145f6'}})
2025-05-29 00:42:16.137574 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '81bd5020-0460-5411-80bb-35101e63cce8'}})
2025-05-29 00:42:16.139713 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:16.140458 | orchestrator |
2025-05-29 00:42:16.140821 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-29 00:42:16.141360 | orchestrator | Thursday 29 May 2025 00:42:16 +0000 (0:00:00.141) 0:00:10.939 **********
2025-05-29 00:42:16.279231 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:42:16.279307 | orchestrator |
2025-05-29 00:42:16.279562 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-29 00:42:16.279989 | orchestrator | Thursday 29 May 2025 00:42:16 +0000 (0:00:00.140) 0:00:11.079 **********
2025-05-29 00:42:16.426978 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:42:16.428629 | orchestrator |
2025-05-29 00:42:16.428656 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-29 00:42:16.429223 | orchestrator | Thursday 29 May 2025 00:42:16 +0000 (0:00:00.146) 0:00:11.226 **********
2025-05-29 00:42:16.555101 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:16.555284 | orchestrator |
2025-05-29 00:42:16.555931 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-29 00:42:16.556320 | orchestrator | Thursday 29 May 2025 00:42:16 +0000 (0:00:00.133) 0:00:11.360 **********
2025-05-29 00:42:16.678115 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:16.679338 | orchestrator |
2025-05-29 00:42:16.680782 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-29 00:42:16.682206 | orchestrator | Thursday 29 May 2025 00:42:16 +0000 (0:00:00.121) 0:00:11.481 **********
2025-05-29 00:42:16.811535 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:16.811731 | orchestrator |
2025-05-29 00:42:16.812345 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-29 00:42:16.812714 | orchestrator | Thursday 29 May 2025 00:42:16 +0000 (0:00:00.135) 0:00:11.617 **********
2025-05-29 00:42:17.079059 | orchestrator | ok: [testbed-node-3] => {
2025-05-29 00:42:17.081272 | orchestrator |  "ceph_osd_devices": {
2025-05-29 00:42:17.082282 | orchestrator |  "sdb": {
2025-05-29 00:42:17.083912 | orchestrator |  "osd_lvm_uuid": "b02a0e5a-ac94-54a1-88a1-38ba26e145f6"
2025-05-29 00:42:17.085260 | orchestrator |  },
2025-05-29 00:42:17.086818 | orchestrator |  "sdc": {
2025-05-29 00:42:17.088178 | orchestrator |  "osd_lvm_uuid": "81bd5020-0460-5411-80bb-35101e63cce8"
2025-05-29 00:42:17.088962 | orchestrator |  }
2025-05-29 00:42:17.089364 | orchestrator |  }
2025-05-29 00:42:17.090300 | orchestrator | }
2025-05-29 00:42:17.090978 | orchestrator |
2025-05-29 00:42:17.091830 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-29 00:42:17.091903 | orchestrator | Thursday 29 May 2025 00:42:17 +0000 (0:00:00.134) 0:00:11.883 **********
2025-05-29 00:42:17.214158 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:17.214236 | orchestrator |
2025-05-29 00:42:17.214465 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-29 00:42:17.215174 | orchestrator | Thursday 29 May 2025 00:42:17 +0000 (0:00:00.132) 0:00:12.018 **********
2025-05-29 00:42:17.346668 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:17.346769 | orchestrator |
2025-05-29 00:42:17.346785 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-29 00:42:17.346798 | orchestrator | Thursday 29 May 2025 00:42:17 +0000 (0:00:00.132) 0:00:12.150 **********
2025-05-29 00:42:17.469868 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:42:17.470846 | orchestrator |
2025-05-29 00:42:17.470887 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-29 00:42:17.471787 | orchestrator | Thursday 29 May 2025 00:42:17 +0000 (0:00:00.125) 0:00:12.275 **********
2025-05-29 00:42:17.746704 | orchestrator | changed: [testbed-node-3] => {
2025-05-29 00:42:17.748347 | orchestrator |  "_ceph_configure_lvm_config_data": {
2025-05-29 00:42:17.749756 | orchestrator |  "ceph_osd_devices": {
2025-05-29 00:42:17.751990 | orchestrator |  "sdb": {
2025-05-29 00:42:17.753050 | orchestrator |  "osd_lvm_uuid": "b02a0e5a-ac94-54a1-88a1-38ba26e145f6"
2025-05-29 00:42:17.755345 | orchestrator |  },
2025-05-29 00:42:17.755582 | orchestrator |  "sdc": {
2025-05-29 00:42:17.758565 | orchestrator |  "osd_lvm_uuid": "81bd5020-0460-5411-80bb-35101e63cce8"
2025-05-29 00:42:17.758842 | orchestrator |  }
2025-05-29 00:42:17.759153 | orchestrator |  },
2025-05-29 00:42:17.760199 | orchestrator |  "lvm_volumes": [
2025-05-29 00:42:17.760575 | orchestrator |  {
2025-05-29 00:42:17.760682 | orchestrator |  "data": "osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6",
2025-05-29 00:42:17.761119 | orchestrator |  "data_vg": "ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6"
2025-05-29 00:42:17.761563 | orchestrator |  },
2025-05-29 00:42:17.761982 | orchestrator |  {
2025-05-29 00:42:17.762175 | orchestrator |  "data": "osd-block-81bd5020-0460-5411-80bb-35101e63cce8",
2025-05-29 00:42:17.762884 | orchestrator |  "data_vg": "ceph-81bd5020-0460-5411-80bb-35101e63cce8"
2025-05-29 00:42:17.763249 | orchestrator |  }
2025-05-29 00:42:17.763617 | orchestrator |  ]
2025-05-29 00:42:17.764122 | orchestrator |  }
2025-05-29 00:42:17.764448 | orchestrator | }
2025-05-29 00:42:17.765055 | orchestrator |
2025-05-29 00:42:17.765513 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-29 00:42:17.765987 | orchestrator | Thursday 29 May 2025 00:42:17 +0000 (0:00:00.276) 0:00:12.552 **********
2025-05-29 00:42:19.746639 | orchestrator | changed: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-29 00:42:19.746767 | orchestrator |
2025-05-29 00:42:19.746847 | orchestrator | PLAY [Ceph
configure LVM] ****************************************************** 2025-05-29 00:42:19.747276 | orchestrator | 2025-05-29 00:42:19.747298 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-29 00:42:19.747310 | orchestrator | Thursday 29 May 2025 00:42:19 +0000 (0:00:02.000) 0:00:14.552 ********** 2025-05-29 00:42:19.992796 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-29 00:42:19.992932 | orchestrator | 2025-05-29 00:42:19.993012 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-29 00:42:19.993029 | orchestrator | Thursday 29 May 2025 00:42:19 +0000 (0:00:00.243) 0:00:14.796 ********** 2025-05-29 00:42:20.203281 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:42:20.203634 | orchestrator | 2025-05-29 00:42:20.203671 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:20.203692 | orchestrator | Thursday 29 May 2025 00:42:20 +0000 (0:00:00.209) 0:00:15.005 ********** 2025-05-29 00:42:20.563834 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-29 00:42:20.566099 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-29 00:42:20.566139 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-29 00:42:20.566151 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-29 00:42:20.566163 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-29 00:42:20.566174 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-29 00:42:20.566186 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-29 00:42:20.566452 
| orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-29 00:42:20.566769 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-29 00:42:20.567183 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-29 00:42:20.567905 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-29 00:42:20.568429 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-29 00:42:20.568698 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-29 00:42:20.569514 | orchestrator | 2025-05-29 00:42:20.569927 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:20.570480 | orchestrator | Thursday 29 May 2025 00:42:20 +0000 (0:00:00.362) 0:00:15.367 ********** 2025-05-29 00:42:20.791968 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:20.793447 | orchestrator | 2025-05-29 00:42:20.798100 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:20.798188 | orchestrator | Thursday 29 May 2025 00:42:20 +0000 (0:00:00.227) 0:00:15.595 ********** 2025-05-29 00:42:21.013011 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:21.013109 | orchestrator | 2025-05-29 00:42:21.013186 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:21.013624 | orchestrator | Thursday 29 May 2025 00:42:21 +0000 (0:00:00.218) 0:00:15.814 ********** 2025-05-29 00:42:21.231781 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:21.234467 | orchestrator | 2025-05-29 00:42:21.234553 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:21.234644 | 
orchestrator | Thursday 29 May 2025 00:42:21 +0000 (0:00:00.221) 0:00:16.035 ********** 2025-05-29 00:42:21.558618 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:21.558743 | orchestrator | 2025-05-29 00:42:21.558758 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:21.560163 | orchestrator | Thursday 29 May 2025 00:42:21 +0000 (0:00:00.319) 0:00:16.355 ********** 2025-05-29 00:42:22.280157 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:22.285291 | orchestrator | 2025-05-29 00:42:22.286363 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:22.288816 | orchestrator | Thursday 29 May 2025 00:42:22 +0000 (0:00:00.728) 0:00:17.084 ********** 2025-05-29 00:42:22.491580 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:22.492241 | orchestrator | 2025-05-29 00:42:22.494098 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:22.496107 | orchestrator | Thursday 29 May 2025 00:42:22 +0000 (0:00:00.209) 0:00:17.294 ********** 2025-05-29 00:42:22.690853 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:22.694530 | orchestrator | 2025-05-29 00:42:22.694566 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:22.694581 | orchestrator | Thursday 29 May 2025 00:42:22 +0000 (0:00:00.197) 0:00:17.491 ********** 2025-05-29 00:42:22.887559 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:22.887663 | orchestrator | 2025-05-29 00:42:22.888227 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:22.889151 | orchestrator | Thursday 29 May 2025 00:42:22 +0000 (0:00:00.201) 0:00:17.692 ********** 2025-05-29 00:42:23.298344 | orchestrator | ok: [testbed-node-4] => 
(item=scsi-0QEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61) 2025-05-29 00:42:23.299319 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61) 2025-05-29 00:42:23.300168 | orchestrator | 2025-05-29 00:42:23.301518 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:23.302197 | orchestrator | Thursday 29 May 2025 00:42:23 +0000 (0:00:00.409) 0:00:18.101 ********** 2025-05-29 00:42:23.728037 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d4d6d7dc-ffab-40f4-8a14-6defed4afc9f) 2025-05-29 00:42:23.728138 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d4d6d7dc-ffab-40f4-8a14-6defed4afc9f) 2025-05-29 00:42:23.729708 | orchestrator | 2025-05-29 00:42:23.730815 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:23.731931 | orchestrator | Thursday 29 May 2025 00:42:23 +0000 (0:00:00.429) 0:00:18.531 ********** 2025-05-29 00:42:24.168821 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ab52b3eb-0fd7-41fe-9d4d-bdc516081274) 2025-05-29 00:42:24.170264 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ab52b3eb-0fd7-41fe-9d4d-bdc516081274) 2025-05-29 00:42:24.171896 | orchestrator | 2025-05-29 00:42:24.174116 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:24.177428 | orchestrator | Thursday 29 May 2025 00:42:24 +0000 (0:00:00.441) 0:00:18.972 ********** 2025-05-29 00:42:24.593150 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f7dbb189-5858-4eca-9499-fceb9ae8f8d2) 2025-05-29 00:42:24.593818 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f7dbb189-5858-4eca-9499-fceb9ae8f8d2) 2025-05-29 00:42:24.593992 | orchestrator | 2025-05-29 00:42:24.594257 | orchestrator | TASK [Add known links to 
the list of available block devices] ****************** 2025-05-29 00:42:24.594978 | orchestrator | Thursday 29 May 2025 00:42:24 +0000 (0:00:00.424) 0:00:19.397 ********** 2025-05-29 00:42:24.934627 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-29 00:42:24.935845 | orchestrator | 2025-05-29 00:42:24.936940 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:24.937300 | orchestrator | Thursday 29 May 2025 00:42:24 +0000 (0:00:00.341) 0:00:19.738 ********** 2025-05-29 00:42:25.594426 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop0) 2025-05-29 00:42:25.594941 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-29 00:42:25.596957 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-29 00:42:25.598115 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-29 00:42:25.598843 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-29 00:42:25.599861 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-29 00:42:25.600826 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-29 00:42:25.601237 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-29 00:42:25.602789 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-29 00:42:25.603290 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-29 00:42:25.603574 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 
2025-05-29 00:42:25.604711 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-29 00:42:25.605141 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-29 00:42:25.605692 | orchestrator | 2025-05-29 00:42:25.606415 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:25.607047 | orchestrator | Thursday 29 May 2025 00:42:25 +0000 (0:00:00.658) 0:00:20.397 ********** 2025-05-29 00:42:25.829914 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:25.830430 | orchestrator | 2025-05-29 00:42:25.830637 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:25.831336 | orchestrator | Thursday 29 May 2025 00:42:25 +0000 (0:00:00.237) 0:00:20.634 ********** 2025-05-29 00:42:26.038750 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:26.038997 | orchestrator | 2025-05-29 00:42:26.039152 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:26.039733 | orchestrator | Thursday 29 May 2025 00:42:26 +0000 (0:00:00.208) 0:00:20.843 ********** 2025-05-29 00:42:26.239929 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:26.241459 | orchestrator | 2025-05-29 00:42:26.241895 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:26.243973 | orchestrator | Thursday 29 May 2025 00:42:26 +0000 (0:00:00.199) 0:00:21.043 ********** 2025-05-29 00:42:26.439989 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:26.440151 | orchestrator | 2025-05-29 00:42:26.440700 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:26.441414 | orchestrator | Thursday 29 May 2025 00:42:26 +0000 (0:00:00.202) 0:00:21.245 ********** 2025-05-29 00:42:26.654619 
| orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:26.655208 | orchestrator | 2025-05-29 00:42:26.655730 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:26.656609 | orchestrator | Thursday 29 May 2025 00:42:26 +0000 (0:00:00.213) 0:00:21.458 ********** 2025-05-29 00:42:26.844731 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:26.844831 | orchestrator | 2025-05-29 00:42:26.845536 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:26.846231 | orchestrator | Thursday 29 May 2025 00:42:26 +0000 (0:00:00.190) 0:00:21.648 ********** 2025-05-29 00:42:27.059500 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:27.059747 | orchestrator | 2025-05-29 00:42:27.060552 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:27.061481 | orchestrator | Thursday 29 May 2025 00:42:27 +0000 (0:00:00.214) 0:00:21.863 ********** 2025-05-29 00:42:27.254470 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:27.254587 | orchestrator | 2025-05-29 00:42:27.255394 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:27.256048 | orchestrator | Thursday 29 May 2025 00:42:27 +0000 (0:00:00.194) 0:00:22.058 ********** 2025-05-29 00:42:28.105905 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-29 00:42:28.106084 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-29 00:42:28.106101 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-29 00:42:28.106114 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-29 00:42:28.106167 | orchestrator | 2025-05-29 00:42:28.106882 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:28.107119 | orchestrator | Thursday 29 May 2025 00:42:28 +0000 (0:00:00.843) 0:00:22.902 
********** 2025-05-29 00:42:28.827045 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:28.828230 | orchestrator | 2025-05-29 00:42:28.830777 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:28.830806 | orchestrator | Thursday 29 May 2025 00:42:28 +0000 (0:00:00.727) 0:00:23.629 ********** 2025-05-29 00:42:29.058461 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:29.059161 | orchestrator | 2025-05-29 00:42:29.060907 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:29.061064 | orchestrator | Thursday 29 May 2025 00:42:29 +0000 (0:00:00.233) 0:00:23.863 ********** 2025-05-29 00:42:29.271430 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:29.271758 | orchestrator | 2025-05-29 00:42:29.272525 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:42:29.273186 | orchestrator | Thursday 29 May 2025 00:42:29 +0000 (0:00:00.212) 0:00:24.075 ********** 2025-05-29 00:42:29.475004 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:29.475168 | orchestrator | 2025-05-29 00:42:29.476032 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] *********************************************** 2025-05-29 00:42:29.476828 | orchestrator | Thursday 29 May 2025 00:42:29 +0000 (0:00:00.202) 0:00:24.278 ********** 2025-05-29 00:42:29.670589 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': None}) 2025-05-29 00:42:29.670761 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': None}) 2025-05-29 00:42:29.671429 | orchestrator | 2025-05-29 00:42:29.672068 | orchestrator | TASK [Generate WAL VG names] *************************************************** 2025-05-29 00:42:29.672868 | orchestrator | Thursday 29 May 2025 00:42:29 +0000 (0:00:00.195) 0:00:24.474 ********** 2025-05-29 00:42:29.812973 | orchestrator | skipping: 
[testbed-node-4] 2025-05-29 00:42:29.814240 | orchestrator | 2025-05-29 00:42:29.815059 | orchestrator | TASK [Generate DB VG names] **************************************************** 2025-05-29 00:42:29.816211 | orchestrator | Thursday 29 May 2025 00:42:29 +0000 (0:00:00.141) 0:00:24.615 ********** 2025-05-29 00:42:29.951094 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:29.951787 | orchestrator | 2025-05-29 00:42:29.952646 | orchestrator | TASK [Generate shared DB/WAL VG names] ***************************************** 2025-05-29 00:42:29.953168 | orchestrator | Thursday 29 May 2025 00:42:29 +0000 (0:00:00.140) 0:00:24.756 ********** 2025-05-29 00:42:30.095480 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:30.095960 | orchestrator | 2025-05-29 00:42:30.096874 | orchestrator | TASK [Define lvm_volumes structures] ******************************************* 2025-05-29 00:42:30.098073 | orchestrator | Thursday 29 May 2025 00:42:30 +0000 (0:00:00.142) 0:00:24.899 ********** 2025-05-29 00:42:30.252282 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:42:30.252903 | orchestrator | 2025-05-29 00:42:30.254295 | orchestrator | TASK [Generate lvm_volumes structure (block only)] ***************************** 2025-05-29 00:42:30.256584 | orchestrator | Thursday 29 May 2025 00:42:30 +0000 (0:00:00.157) 0:00:25.056 ********** 2025-05-29 00:42:30.433946 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2961dba5-5d3e-5262-aab3-a8717ef28b96'}}) 2025-05-29 00:42:30.434640 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '10c8172d-d6a1-5b27-956e-8c5bc818fcb1'}}) 2025-05-29 00:42:30.435679 | orchestrator | 2025-05-29 00:42:30.436661 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] ***************************** 2025-05-29 00:42:30.437859 | orchestrator | Thursday 29 May 2025 00:42:30 +0000 (0:00:00.181) 0:00:25.237 ********** 2025-05-29 00:42:30.601152 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2961dba5-5d3e-5262-aab3-a8717ef28b96'}})  2025-05-29 00:42:30.602461 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '10c8172d-d6a1-5b27-956e-8c5bc818fcb1'}})  2025-05-29 00:42:30.603488 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:30.604299 | orchestrator | 2025-05-29 00:42:30.605043 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] **************************** 2025-05-29 00:42:30.606324 | orchestrator | Thursday 29 May 2025 00:42:30 +0000 (0:00:00.167) 0:00:25.405 ********** 2025-05-29 00:42:30.960501 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2961dba5-5d3e-5262-aab3-a8717ef28b96'}})  2025-05-29 00:42:30.960595 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '10c8172d-d6a1-5b27-956e-8c5bc818fcb1'}})  2025-05-29 00:42:30.961283 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:30.961937 | orchestrator | 2025-05-29 00:42:30.963256 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] *********************** 2025-05-29 00:42:30.963949 | orchestrator | Thursday 29 May 2025 00:42:30 +0000 (0:00:00.355) 0:00:25.761 ********** 2025-05-29 00:42:31.111892 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2961dba5-5d3e-5262-aab3-a8717ef28b96'}})  2025-05-29 00:42:31.111996 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '10c8172d-d6a1-5b27-956e-8c5bc818fcb1'}})  2025-05-29 00:42:31.112593 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:31.113407 | orchestrator | 2025-05-29 00:42:31.114152 | orchestrator | TASK [Compile lvm_volumes] ***************************************************** 2025-05-29 00:42:31.114672 | orchestrator | Thursday 29 May 2025 00:42:31 +0000 
(0:00:00.154) 0:00:25.915 ********** 2025-05-29 00:42:31.267428 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:42:31.267645 | orchestrator | 2025-05-29 00:42:31.268434 | orchestrator | TASK [Set OSD devices config data] ********************************************* 2025-05-29 00:42:31.269924 | orchestrator | Thursday 29 May 2025 00:42:31 +0000 (0:00:00.156) 0:00:26.071 ********** 2025-05-29 00:42:31.422582 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:42:31.422771 | orchestrator | 2025-05-29 00:42:31.424200 | orchestrator | TASK [Set DB devices config data] ********************************************** 2025-05-29 00:42:31.424664 | orchestrator | Thursday 29 May 2025 00:42:31 +0000 (0:00:00.155) 0:00:26.226 ********** 2025-05-29 00:42:31.553002 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:31.553828 | orchestrator | 2025-05-29 00:42:31.554781 | orchestrator | TASK [Set WAL devices config data] ********************************************* 2025-05-29 00:42:31.556214 | orchestrator | Thursday 29 May 2025 00:42:31 +0000 (0:00:00.129) 0:00:26.356 ********** 2025-05-29 00:42:31.695447 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:31.695568 | orchestrator | 2025-05-29 00:42:31.695591 | orchestrator | TASK [Set DB+WAL devices config data] ****************************************** 2025-05-29 00:42:31.696204 | orchestrator | Thursday 29 May 2025 00:42:31 +0000 (0:00:00.144) 0:00:26.500 ********** 2025-05-29 00:42:31.840223 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:31.841453 | orchestrator | 2025-05-29 00:42:31.842719 | orchestrator | TASK [Print ceph_osd_devices] ************************************************** 2025-05-29 00:42:31.843135 | orchestrator | Thursday 29 May 2025 00:42:31 +0000 (0:00:00.144) 0:00:26.645 ********** 2025-05-29 00:42:31.992834 | orchestrator | ok: [testbed-node-4] => { 2025-05-29 00:42:31.993742 | orchestrator |  "ceph_osd_devices": { 2025-05-29 00:42:31.994687 | orchestrator |  "sdb": 
{ 2025-05-29 00:42:31.999731 | orchestrator |  "osd_lvm_uuid": "2961dba5-5d3e-5262-aab3-a8717ef28b96" 2025-05-29 00:42:32.000988 | orchestrator |  }, 2025-05-29 00:42:32.001665 | orchestrator |  "sdc": { 2025-05-29 00:42:32.006650 | orchestrator |  "osd_lvm_uuid": "10c8172d-d6a1-5b27-956e-8c5bc818fcb1" 2025-05-29 00:42:32.006926 | orchestrator |  } 2025-05-29 00:42:32.007657 | orchestrator |  } 2025-05-29 00:42:32.009614 | orchestrator | } 2025-05-29 00:42:32.011085 | orchestrator | 2025-05-29 00:42:32.011462 | orchestrator | TASK [Print WAL devices] ******************************************************* 2025-05-29 00:42:32.011735 | orchestrator | Thursday 29 May 2025 00:42:31 +0000 (0:00:00.152) 0:00:26.797 ********** 2025-05-29 00:42:32.128754 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:32.129647 | orchestrator | 2025-05-29 00:42:32.131248 | orchestrator | TASK [Print DB devices] ******************************************************** 2025-05-29 00:42:32.131981 | orchestrator | Thursday 29 May 2025 00:42:32 +0000 (0:00:00.136) 0:00:26.933 ********** 2025-05-29 00:42:32.266287 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:32.266514 | orchestrator | 2025-05-29 00:42:32.267200 | orchestrator | TASK [Print shared DB/WAL devices] ********************************************* 2025-05-29 00:42:32.267782 | orchestrator | Thursday 29 May 2025 00:42:32 +0000 (0:00:00.137) 0:00:27.071 ********** 2025-05-29 00:42:32.410935 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:42:32.411634 | orchestrator | 2025-05-29 00:42:32.412433 | orchestrator | TASK [Print configuration data] ************************************************ 2025-05-29 00:42:32.413203 | orchestrator | Thursday 29 May 2025 00:42:32 +0000 (0:00:00.144) 0:00:27.215 ********** 2025-05-29 00:42:32.863771 | orchestrator | changed: [testbed-node-4] => { 2025-05-29 00:42:32.863925 | orchestrator |  "_ceph_configure_lvm_config_data": { 2025-05-29 00:42:32.864884 | orchestrator 
|  "ceph_osd_devices": { 2025-05-29 00:42:32.866474 | orchestrator |  "sdb": { 2025-05-29 00:42:32.868318 | orchestrator |  "osd_lvm_uuid": "2961dba5-5d3e-5262-aab3-a8717ef28b96" 2025-05-29 00:42:32.868421 | orchestrator |  }, 2025-05-29 00:42:32.868435 | orchestrator |  "sdc": { 2025-05-29 00:42:32.868499 | orchestrator |  "osd_lvm_uuid": "10c8172d-d6a1-5b27-956e-8c5bc818fcb1" 2025-05-29 00:42:32.869103 | orchestrator |  } 2025-05-29 00:42:32.869733 | orchestrator |  }, 2025-05-29 00:42:32.870735 | orchestrator |  "lvm_volumes": [ 2025-05-29 00:42:32.870803 | orchestrator |  { 2025-05-29 00:42:32.871451 | orchestrator |  "data": "osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96", 2025-05-29 00:42:32.872029 | orchestrator |  "data_vg": "ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96" 2025-05-29 00:42:32.872532 | orchestrator |  }, 2025-05-29 00:42:32.873148 | orchestrator |  { 2025-05-29 00:42:32.873748 | orchestrator |  "data": "osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1", 2025-05-29 00:42:32.874309 | orchestrator |  "data_vg": "ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1" 2025-05-29 00:42:32.874814 | orchestrator |  } 2025-05-29 00:42:32.875423 | orchestrator |  ] 2025-05-29 00:42:32.875511 | orchestrator |  } 2025-05-29 00:42:32.875981 | orchestrator | } 2025-05-29 00:42:32.876247 | orchestrator | 2025-05-29 00:42:32.876698 | orchestrator | RUNNING HANDLER [Write configuration file] ************************************* 2025-05-29 00:42:32.876957 | orchestrator | Thursday 29 May 2025 00:42:32 +0000 (0:00:00.450) 0:00:27.665 ********** 2025-05-29 00:42:34.254544 | orchestrator | changed: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-29 00:42:34.255263 | orchestrator | 2025-05-29 00:42:34.256490 | orchestrator | PLAY [Ceph configure LVM] ****************************************************** 2025-05-29 00:42:34.256917 | orchestrator | 2025-05-29 00:42:34.257586 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 
2025-05-29 00:42:34.258212 | orchestrator | Thursday 29 May 2025 00:42:34 +0000 (0:00:01.391) 0:00:29.057 ********** 2025-05-29 00:42:34.484285 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)] 2025-05-29 00:42:34.484415 | orchestrator | 2025-05-29 00:42:34.484991 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-29 00:42:34.485529 | orchestrator | Thursday 29 May 2025 00:42:34 +0000 (0:00:00.231) 0:00:29.288 ********** 2025-05-29 00:42:34.730907 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:42:34.731024 | orchestrator | 2025-05-29 00:42:34.731826 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:42:34.732289 | orchestrator | Thursday 29 May 2025 00:42:34 +0000 (0:00:00.246) 0:00:29.535 ********** 2025-05-29 00:42:35.490878 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0) 2025-05-29 00:42:35.491092 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1) 2025-05-29 00:42:35.492537 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2) 2025-05-29 00:42:35.492805 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3) 2025-05-29 00:42:35.495707 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4) 2025-05-29 00:42:35.495737 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5) 2025-05-29 00:42:35.495749 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6) 2025-05-29 00:42:35.496918 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7) 2025-05-29 00:42:35.497246 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda) 2025-05-29 
00:42:35.498558 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-05-29 00:42:35.499617 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-05-29 00:42:35.500536 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-05-29 00:42:35.501424 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-05-29 00:42:35.502705 | orchestrator |
2025-05-29 00:42:35.503373 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:35.504218 | orchestrator | Thursday 29 May 2025 00:42:35 +0000 (0:00:00.759) 0:00:30.294 **********
2025-05-29 00:42:35.700196 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:35.700276 | orchestrator |
2025-05-29 00:42:35.700870 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:35.702599 | orchestrator | Thursday 29 May 2025 00:42:35 +0000 (0:00:00.207) 0:00:30.502 **********
2025-05-29 00:42:35.909407 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:35.912510 | orchestrator |
2025-05-29 00:42:35.912555 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:35.912568 | orchestrator | Thursday 29 May 2025 00:42:35 +0000 (0:00:00.209) 0:00:30.711 **********
2025-05-29 00:42:36.120824 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:36.121024 | orchestrator |
2025-05-29 00:42:36.122261 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:36.123137 | orchestrator | Thursday 29 May 2025 00:42:36 +0000 (0:00:00.209) 0:00:30.921 **********
2025-05-29 00:42:36.356686 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:36.358090 | orchestrator |
2025-05-29 00:42:36.358727 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:36.359044 | orchestrator | Thursday 29 May 2025 00:42:36 +0000 (0:00:00.235) 0:00:31.157 **********
2025-05-29 00:42:36.569449 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:36.570184 | orchestrator |
2025-05-29 00:42:36.570646 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:36.571580 | orchestrator | Thursday 29 May 2025 00:42:36 +0000 (0:00:00.214) 0:00:31.371 **********
2025-05-29 00:42:36.801676 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:36.802523 | orchestrator |
2025-05-29 00:42:36.804769 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:36.806802 | orchestrator | Thursday 29 May 2025 00:42:36 +0000 (0:00:00.232) 0:00:31.604 **********
2025-05-29 00:42:37.017651 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:37.017859 | orchestrator |
2025-05-29 00:42:37.018386 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:37.018758 | orchestrator | Thursday 29 May 2025 00:42:37 +0000 (0:00:00.217) 0:00:31.821 **********
2025-05-29 00:42:37.233333 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:37.233466 | orchestrator |
2025-05-29 00:42:37.234484 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:37.235748 | orchestrator | Thursday 29 May 2025 00:42:37 +0000 (0:00:00.214) 0:00:32.036 **********
2025-05-29 00:42:38.175740 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428)
2025-05-29 00:42:38.177326 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428)
2025-05-29 00:42:38.178471 | orchestrator |
2025-05-29 00:42:38.180528 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:38.180661 | orchestrator | Thursday 29 May 2025 00:42:38 +0000 (0:00:00.941) 0:00:32.977 **********
2025-05-29 00:42:38.631919 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_baffed07-1ba6-4c69-bef3-fae49f76e29e)
2025-05-29 00:42:38.633284 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_baffed07-1ba6-4c69-bef3-fae49f76e29e)
2025-05-29 00:42:38.636105 | orchestrator |
2025-05-29 00:42:38.636175 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:38.636197 | orchestrator | Thursday 29 May 2025 00:42:38 +0000 (0:00:00.455) 0:00:33.433 **********
2025-05-29 00:42:39.058133 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6be5e360-5fe4-4176-98be-0e33dc067da2)
2025-05-29 00:42:39.058985 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6be5e360-5fe4-4176-98be-0e33dc067da2)
2025-05-29 00:42:39.059702 | orchestrator |
2025-05-29 00:42:39.062499 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:39.062540 | orchestrator | Thursday 29 May 2025 00:42:39 +0000 (0:00:00.427) 0:00:33.861 **********
2025-05-29 00:42:39.499478 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c045ec7e-dfd2-45aa-a5da-e7ebbe64f976)
2025-05-29 00:42:39.503607 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c045ec7e-dfd2-45aa-a5da-e7ebbe64f976)
2025-05-29 00:42:39.503658 | orchestrator |
2025-05-29 00:42:39.504461 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:42:39.506253 | orchestrator | Thursday 29 May 2025 00:42:39 +0000 (0:00:00.441) 0:00:34.302 **********
2025-05-29 00:42:39.882512 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-29 00:42:39.882608 | orchestrator |
2025-05-29 00:42:39.882623 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:39.882636 | orchestrator | Thursday 29 May 2025 00:42:39 +0000 (0:00:00.382) 0:00:34.685 **********
2025-05-29 00:42:40.279979 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-05-29 00:42:40.281026 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-05-29 00:42:40.282254 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-05-29 00:42:40.283025 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-05-29 00:42:40.283519 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-05-29 00:42:40.284924 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-05-29 00:42:40.285963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-05-29 00:42:40.286559 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-05-29 00:42:40.287225 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-05-29 00:42:40.287951 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-05-29 00:42:40.288478 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-05-29 00:42:40.289125 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-05-29 00:42:40.289769 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-05-29 00:42:40.290480 | orchestrator |
2025-05-29 00:42:40.291099 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:40.291428 | orchestrator | Thursday 29 May 2025 00:42:40 +0000 (0:00:00.397) 0:00:35.082 **********
2025-05-29 00:42:40.492766 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:40.493301 | orchestrator |
2025-05-29 00:42:40.494533 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:40.495438 | orchestrator | Thursday 29 May 2025 00:42:40 +0000 (0:00:00.214) 0:00:35.297 **********
2025-05-29 00:42:40.689922 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:40.690280 | orchestrator |
2025-05-29 00:42:40.690858 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:40.691692 | orchestrator | Thursday 29 May 2025 00:42:40 +0000 (0:00:00.195) 0:00:35.492 **********
2025-05-29 00:42:40.903174 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:40.904249 | orchestrator |
2025-05-29 00:42:40.904900 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:40.905773 | orchestrator | Thursday 29 May 2025 00:42:40 +0000 (0:00:00.214) 0:00:35.707 **********
2025-05-29 00:42:41.159082 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:41.159252 | orchestrator |
2025-05-29 00:42:41.159859 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:41.160428 | orchestrator | Thursday 29 May 2025 00:42:41 +0000 (0:00:00.255) 0:00:35.962 **********
2025-05-29 00:42:41.747063 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:41.748048 | orchestrator |
2025-05-29 00:42:41.748682 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:41.749755 | orchestrator | Thursday 29 May 2025 00:42:41 +0000 (0:00:00.589) 0:00:36.551 **********
2025-05-29 00:42:41.981281 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:41.982206 | orchestrator |
2025-05-29 00:42:41.982592 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:41.983414 | orchestrator | Thursday 29 May 2025 00:42:41 +0000 (0:00:00.233) 0:00:36.784 **********
2025-05-29 00:42:42.174250 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:42.174489 | orchestrator |
2025-05-29 00:42:42.175706 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:42.176538 | orchestrator | Thursday 29 May 2025 00:42:42 +0000 (0:00:00.193) 0:00:36.978 **********
2025-05-29 00:42:42.379887 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:42.380579 | orchestrator |
2025-05-29 00:42:42.381178 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:42.382249 | orchestrator | Thursday 29 May 2025 00:42:42 +0000 (0:00:00.206) 0:00:37.184 **********
2025-05-29 00:42:43.003542 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-05-29 00:42:43.003765 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-05-29 00:42:43.004754 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-05-29 00:42:43.005207 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-05-29 00:42:43.005780 | orchestrator |
2025-05-29 00:42:43.008490 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:43.010120 | orchestrator | Thursday 29 May 2025 00:42:42 +0000 (0:00:00.621) 0:00:37.806 **********
2025-05-29 00:42:43.207573 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:43.207884 | orchestrator |
2025-05-29 00:42:43.209654 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:43.213219 | orchestrator | Thursday 29 May 2025 00:42:43 +0000 (0:00:00.205) 0:00:38.012 **********
2025-05-29 00:42:43.404958 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:43.405395 | orchestrator |
2025-05-29 00:42:43.405907 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:43.407083 | orchestrator | Thursday 29 May 2025 00:42:43 +0000 (0:00:00.197) 0:00:38.209 **********
2025-05-29 00:42:43.612825 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:43.613213 | orchestrator |
2025-05-29 00:42:43.614949 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:42:43.615581 | orchestrator | Thursday 29 May 2025 00:42:43 +0000 (0:00:00.208) 0:00:38.417 **********
2025-05-29 00:42:43.824031 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:43.824231 | orchestrator |
2025-05-29 00:42:43.825254 | orchestrator | TASK [Set UUIDs for OSD VGs/LVs] ***********************************************
2025-05-29 00:42:43.826306 | orchestrator | Thursday 29 May 2025 00:42:43 +0000 (0:00:00.208) 0:00:38.625 **********
2025-05-29 00:42:44.019159 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': None})
2025-05-29 00:42:44.020602 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': None})
2025-05-29 00:42:44.021369 | orchestrator |
2025-05-29 00:42:44.022376 | orchestrator | TASK [Generate WAL VG names] ***************************************************
2025-05-29 00:42:44.023199 | orchestrator | Thursday 29 May 2025 00:42:44 +0000 (0:00:00.197) 0:00:38.823 **********
2025-05-29 00:42:44.335068 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:44.336366 | orchestrator |
2025-05-29 00:42:44.337276 | orchestrator | TASK [Generate DB VG names] ****************************************************
2025-05-29 00:42:44.340011 | orchestrator | Thursday 29 May 2025 00:42:44 +0000 (0:00:00.315) 0:00:39.139 **********
2025-05-29 00:42:44.477762 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:44.477934 | orchestrator |
2025-05-29 00:42:44.478834 | orchestrator | TASK [Generate shared DB/WAL VG names] *****************************************
2025-05-29 00:42:44.482213 | orchestrator | Thursday 29 May 2025 00:42:44 +0000 (0:00:00.141) 0:00:39.280 **********
2025-05-29 00:42:44.622073 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:44.622250 | orchestrator |
2025-05-29 00:42:44.622720 | orchestrator | TASK [Define lvm_volumes structures] *******************************************
2025-05-29 00:42:44.623729 | orchestrator | Thursday 29 May 2025 00:42:44 +0000 (0:00:00.144) 0:00:39.425 **********
2025-05-29 00:42:44.766645 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:42:44.767289 | orchestrator |
2025-05-29 00:42:44.768067 | orchestrator | TASK [Generate lvm_volumes structure (block only)] *****************************
2025-05-29 00:42:44.770849 | orchestrator | Thursday 29 May 2025 00:42:44 +0000 (0:00:00.144) 0:00:39.569 **********
2025-05-29 00:42:44.976802 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a1850b6b-a1b4-57b7-9f5e-deb9029890df'}})
2025-05-29 00:42:44.978857 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '05ae814f-03ae-5777-aef4-91f0b0270e90'}})
2025-05-29 00:42:44.980192 | orchestrator |
2025-05-29 00:42:44.980560 | orchestrator | TASK [Generate lvm_volumes structure (block + db)] *****************************
2025-05-29 00:42:44.981176 | orchestrator | Thursday 29 May 2025 00:42:44 +0000 (0:00:00.200) 0:00:39.770 **********
2025-05-29 00:42:45.136008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a1850b6b-a1b4-57b7-9f5e-deb9029890df'}})
2025-05-29 00:42:45.137146 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '05ae814f-03ae-5777-aef4-91f0b0270e90'}})
2025-05-29 00:42:45.138509 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:45.141356 | orchestrator |
2025-05-29 00:42:45.141408 | orchestrator | TASK [Generate lvm_volumes structure (block + wal)] ****************************
2025-05-29 00:42:45.141422 | orchestrator | Thursday 29 May 2025 00:42:45 +0000 (0:00:00.169) 0:00:39.940 **********
2025-05-29 00:42:45.306983 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a1850b6b-a1b4-57b7-9f5e-deb9029890df'}})
2025-05-29 00:42:45.308083 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '05ae814f-03ae-5777-aef4-91f0b0270e90'}})
2025-05-29 00:42:45.309236 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:45.309787 | orchestrator |
2025-05-29 00:42:45.310741 | orchestrator | TASK [Generate lvm_volumes structure (block + db + wal)] ***********************
2025-05-29 00:42:45.311442 | orchestrator | Thursday 29 May 2025 00:42:45 +0000 (0:00:00.168) 0:00:40.109 **********
2025-05-29 00:42:45.477540 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a1850b6b-a1b4-57b7-9f5e-deb9029890df'}})
2025-05-29 00:42:45.477879 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '05ae814f-03ae-5777-aef4-91f0b0270e90'}})
2025-05-29 00:42:45.479380 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:45.479951 | orchestrator |
2025-05-29 00:42:45.480657 | orchestrator | TASK [Compile lvm_volumes] *****************************************************
2025-05-29 00:42:45.481572 | orchestrator | Thursday 29 May 2025 00:42:45 +0000 (0:00:00.170) 0:00:40.279 **********
2025-05-29 00:42:45.625125 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:42:45.625315 | orchestrator |
2025-05-29 00:42:45.625677 | orchestrator | TASK [Set OSD devices config data] *********************************************
2025-05-29 00:42:45.626488 | orchestrator | Thursday 29 May 2025 00:42:45 +0000 (0:00:00.150) 0:00:40.429 **********
2025-05-29 00:42:45.769940 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:42:45.770173 | orchestrator |
2025-05-29 00:42:45.770674 | orchestrator | TASK [Set DB devices config data] **********************************************
2025-05-29 00:42:45.771518 | orchestrator | Thursday 29 May 2025 00:42:45 +0000 (0:00:00.142) 0:00:40.572 **********
2025-05-29 00:42:45.910509 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:45.912516 | orchestrator |
2025-05-29 00:42:45.912550 | orchestrator | TASK [Set WAL devices config data] *********************************************
2025-05-29 00:42:45.915510 | orchestrator | Thursday 29 May 2025 00:42:45 +0000 (0:00:00.141) 0:00:40.714 **********
2025-05-29 00:42:46.080598 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:46.081212 | orchestrator |
2025-05-29 00:42:46.082428 | orchestrator | TASK [Set DB+WAL devices config data] ******************************************
2025-05-29 00:42:46.083404 | orchestrator | Thursday 29 May 2025 00:42:46 +0000 (0:00:00.168) 0:00:40.883 **********
2025-05-29 00:42:46.461956 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:46.462181 | orchestrator |
2025-05-29 00:42:46.463118 | orchestrator | TASK [Print ceph_osd_devices] **************************************************
2025-05-29 00:42:46.464742 | orchestrator | Thursday 29 May 2025 00:42:46 +0000 (0:00:00.383) 0:00:41.266 **********
2025-05-29 00:42:46.597851 | orchestrator | ok: [testbed-node-5] => {
2025-05-29 00:42:46.599006 | orchestrator |     "ceph_osd_devices": {
2025-05-29 00:42:46.600049 | orchestrator |         "sdb": {
2025-05-29 00:42:46.601845 | orchestrator |             "osd_lvm_uuid": "a1850b6b-a1b4-57b7-9f5e-deb9029890df"
2025-05-29 00:42:46.601876 | orchestrator |         },
2025-05-29 00:42:46.602817 | orchestrator |         "sdc": {
2025-05-29 00:42:46.603260 | orchestrator |             "osd_lvm_uuid": "05ae814f-03ae-5777-aef4-91f0b0270e90"
2025-05-29 00:42:46.603971 | orchestrator |         }
2025-05-29 00:42:46.604599 | orchestrator |     }
2025-05-29 00:42:46.605006 | orchestrator | }
2025-05-29 00:42:46.605603 | orchestrator |
2025-05-29 00:42:46.606756 | orchestrator | TASK [Print WAL devices] *******************************************************
2025-05-29 00:42:46.607929 | orchestrator | Thursday 29 May 2025 00:42:46 +0000 (0:00:00.134) 0:00:41.401 **********
2025-05-29 00:42:46.726969 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:46.727643 | orchestrator |
2025-05-29 00:42:46.728767 | orchestrator | TASK [Print DB devices] ********************************************************
2025-05-29 00:42:46.729486 | orchestrator | Thursday 29 May 2025 00:42:46 +0000 (0:00:00.129) 0:00:41.530 **********
2025-05-29 00:42:46.882780 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:46.883527 | orchestrator |
2025-05-29 00:42:46.884907 | orchestrator | TASK [Print shared DB/WAL devices] *********************************************
2025-05-29 00:42:46.886209 | orchestrator | Thursday 29 May 2025 00:42:46 +0000 (0:00:00.157) 0:00:41.687 **********
2025-05-29 00:42:47.025214 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:42:47.025511 | orchestrator |
2025-05-29 00:42:47.026580 | orchestrator | TASK [Print configuration data] ************************************************
2025-05-29 00:42:47.027704 | orchestrator | Thursday 29 May 2025 00:42:47 +0000 (0:00:00.141) 0:00:41.829 **********
2025-05-29 00:42:47.306850 | orchestrator | changed: [testbed-node-5] => {
2025-05-29 00:42:47.306957 | orchestrator |     "_ceph_configure_lvm_config_data": {
2025-05-29 00:42:47.307655 | orchestrator |         "ceph_osd_devices": {
2025-05-29 00:42:47.308006 | orchestrator |             "sdb": {
2025-05-29 00:42:47.308729 | orchestrator |                 "osd_lvm_uuid": "a1850b6b-a1b4-57b7-9f5e-deb9029890df"
2025-05-29 00:42:47.309397 | orchestrator |             },
2025-05-29 00:42:47.309714 | orchestrator |             "sdc": {
2025-05-29 00:42:47.310468 | orchestrator |                 "osd_lvm_uuid": "05ae814f-03ae-5777-aef4-91f0b0270e90"
2025-05-29 00:42:47.311107 | orchestrator |             }
2025-05-29 00:42:47.311437 | orchestrator |         },
2025-05-29 00:42:47.314499 | orchestrator |         "lvm_volumes": [
2025-05-29 00:42:47.314957 | orchestrator |             {
2025-05-29 00:42:47.314985 | orchestrator |                 "data": "osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df",
2025-05-29 00:42:47.315428 | orchestrator |                 "data_vg": "ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df"
2025-05-29 00:42:47.315557 | orchestrator |             },
2025-05-29 00:42:47.315790 | orchestrator |             {
2025-05-29 00:42:47.316023 | orchestrator |                 "data": "osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90",
2025-05-29 00:42:47.316305 | orchestrator |                 "data_vg": "ceph-05ae814f-03ae-5777-aef4-91f0b0270e90"
2025-05-29 00:42:47.316678 | orchestrator |             }
2025-05-29 00:42:47.317043 | orchestrator |         ]
2025-05-29 00:42:47.317661 | orchestrator |     }
2025-05-29 00:42:47.317758 | orchestrator | }
2025-05-29 00:42:47.317883 | orchestrator |
2025-05-29 00:42:47.317963 | orchestrator | RUNNING HANDLER [Write configuration file] *************************************
2025-05-29 00:42:47.318270 | orchestrator | Thursday 29 May 2025 00:42:47 +0000 (0:00:00.282) 0:00:42.111 **********
2025-05-29 00:42:48.420241 | orchestrator | changed: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-29 00:42:48.420690 | orchestrator |
2025-05-29 00:42:48.421760 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:42:48.421811 | orchestrator | 2025-05-29 00:42:48 | INFO  | Play has been completed. There may now be a delay until all logs have been written.
2025-05-29 00:42:48.421828 | orchestrator | 2025-05-29 00:42:48 | INFO  | Please wait and do not abort execution.
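The "Print configuration data" output above shows how each entry in `ceph_osd_devices` becomes one `lvm_volumes` element: the LV is named `osd-block-<osd_lvm_uuid>` and its VG `ceph-<osd_lvm_uuid>`. A minimal sketch of that mapping, using the values recorded in the log (the real task is templated inside the OSISM playbook, so this is an illustration, not the playbook's code):

```python
# Input as printed by the "Print ceph_osd_devices" task; UUIDs copied from the log.
ceph_osd_devices = {
    "sdb": {"osd_lvm_uuid": "a1850b6b-a1b4-57b7-9f5e-deb9029890df"},
    "sdc": {"osd_lvm_uuid": "05ae814f-03ae-5777-aef4-91f0b0270e90"},
}

def lvm_volumes(devices):
    """Mirror the 'Generate lvm_volumes structure (block only)' task:
    one data LV / VG pair per OSD, both named after the device's LVM UUID."""
    return [
        {
            "data": f"osd-block-{cfg['osd_lvm_uuid']}",
            "data_vg": f"ceph-{cfg['osd_lvm_uuid']}",
        }
        for cfg in devices.values()
    ]

print(lvm_volumes(ceph_osd_devices))
```

The result matches the `lvm_volumes` list written to the configuration file by the handler below.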
2025-05-29 00:42:48.422585 | orchestrator | testbed-node-3 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-29 00:42:48.422653 | orchestrator | testbed-node-4 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-29 00:42:48.422965 | orchestrator | testbed-node-5 : ok=42  changed=2  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0
2025-05-29 00:42:48.423172 | orchestrator |
2025-05-29 00:42:48.423577 | orchestrator |
2025-05-29 00:42:48.424229 | orchestrator |
2025-05-29 00:42:48.424966 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 00:42:48.425870 | orchestrator | Thursday 29 May 2025 00:42:48 +0000 (0:00:01.112) 0:00:43.223 **********
2025-05-29 00:42:48.426305 | orchestrator | ===============================================================================
2025-05-29 00:42:48.426950 | orchestrator | Write configuration file ------------------------------------------------ 4.50s
2025-05-29 00:42:48.427370 | orchestrator | Add known links to the list of available block devices ------------------ 1.67s
2025-05-29 00:42:48.427806 | orchestrator | Add known partitions to the list of available block devices ------------- 1.47s
2025-05-29 00:42:48.428089 | orchestrator | Print configuration data ------------------------------------------------ 1.01s
2025-05-29 00:42:48.428498 | orchestrator | Add known links to the list of available block devices ------------------ 0.94s
2025-05-29 00:42:48.428713 | orchestrator | Add known partitions to the list of available block devices ------------- 0.84s
2025-05-29 00:42:48.428900 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 0.76s
2025-05-29 00:42:48.429149 | orchestrator | Add known links to the list of available block devices ------------------ 0.75s
2025-05-29 00:42:48.429587 | orchestrator | Add known links to the list of available block devices ------------------ 0.73s
2025-05-29 00:42:48.429817 | orchestrator | Add known partitions to the list of available block devices ------------- 0.73s
2025-05-29 00:42:48.430113 | orchestrator | Get initial list of available block devices ----------------------------- 0.71s
2025-05-29 00:42:48.430363 | orchestrator | Add known links to the list of available block devices ------------------ 0.69s
2025-05-29 00:42:48.430666 | orchestrator | Generate lvm_volumes structure (block + wal) ---------------------------- 0.68s
2025-05-29 00:42:48.430976 | orchestrator | Set DB+WAL devices config data ------------------------------------------ 0.66s
2025-05-29 00:42:48.431195 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2025-05-29 00:42:48.432153 | orchestrator | Add known partitions to the list of available block devices ------------- 0.62s
2025-05-29 00:42:48.433188 | orchestrator | Add known partitions to the list of available block devices ------------- 0.59s
2025-05-29 00:42:48.433521 | orchestrator | Generate WAL VG names --------------------------------------------------- 0.58s
2025-05-29 00:42:48.434403 | orchestrator | Print ceph_osd_devices -------------------------------------------------- 0.55s
2025-05-29 00:42:48.435160 | orchestrator | Generate lvm_volumes structure (block only) ----------------------------- 0.55s
2025-05-29 00:43:00.618570 | orchestrator | 2025-05-29 00:43:00 | INFO  | Task bcf506b9-9af1-4033-8d4a-d42b50edad43 is running in background. Output coming soon.
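The `osd_lvm_uuid` values recorded in the play (e.g. `a1850b6b-a1b4-57b7-...`) carry the version-5 nibble, which suggests they are name-based UUIDs, i.e. deterministic, so re-running "Set UUIDs for OSD VGs/LVs" reproduces the same VG/LV names instead of creating new ones. A sketch under that assumption; the namespace and the `hostname-device` name scheme here are hypothetical, not taken from the playbook:

```python
import uuid

def osd_lvm_uuid(hostname: str, device: str) -> str:
    """Name-based (version 5) UUID: same inputs always yield the same UUID.
    The DNS namespace and the name format are illustrative assumptions."""
    return str(uuid.uuid5(uuid.NAMESPACE_DNS, f"{hostname}-{device}"))

u = osd_lvm_uuid("testbed-node-5", "sdb")
print(u)  # the third group always starts with '5' for uuid5
```

Whatever the actual inputs, determinism is what makes the configuration write-out idempotent across repeated runs of the play.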
2025-05-29 00:43:35.827401 | orchestrator | 2025-05-29 00:43:26 | INFO  | Writing 050-kolla-ceph-rgw-hosts.yml with ceph_rgw_hosts
2025-05-29 00:43:35.827501 | orchestrator | 2025-05-29 00:43:26 | INFO  | Writing 050-infrastructure-cephclient-mons.yml with cephclient_mons
2025-05-29 00:43:35.827525 | orchestrator | 2025-05-29 00:43:26 | INFO  | Writing 050-ceph-cluster-fsid.yml with ceph_cluster_fsid
2025-05-29 00:43:35.827538 | orchestrator | 2025-05-29 00:43:27 | INFO  | Handling group overwrites in 99-overwrite
2025-05-29 00:43:35.827550 | orchestrator | 2025-05-29 00:43:27 | INFO  | Removing group frr:children from 60-generic
2025-05-29 00:43:35.827561 | orchestrator | 2025-05-29 00:43:27 | INFO  | Removing group storage:children from 50-kolla
2025-05-29 00:43:35.827571 | orchestrator | 2025-05-29 00:43:27 | INFO  | Removing group netbird:children from 50-infrastruture
2025-05-29 00:43:35.827582 | orchestrator | 2025-05-29 00:43:27 | INFO  | Removing group ceph-mds from 50-ceph
2025-05-29 00:43:35.827594 | orchestrator | 2025-05-29 00:43:27 | INFO  | Removing group ceph-rgw from 50-ceph
2025-05-29 00:43:35.827605 | orchestrator | 2025-05-29 00:43:27 | INFO  | Handling group overwrites in 20-roles
2025-05-29 00:43:35.827616 | orchestrator | 2025-05-29 00:43:27 | INFO  | Removing group k3s_node from 50-infrastruture
2025-05-29 00:43:35.827627 | orchestrator | 2025-05-29 00:43:27 | INFO  | File 20-netbox not found in /inventory.pre/
2025-05-29 00:43:35.827638 | orchestrator | 2025-05-29 00:43:35 | INFO  | Writing /inventory/clustershell/ansible.yaml with clustershell groups
2025-05-29 00:43:37.519693 | orchestrator | 2025-05-29 00:43:37 | INFO  | Task dac4761e-4381-43f8-9f92-c2dfa307b301 (ceph-create-lvm-devices) was prepared for execution.
2025-05-29 00:43:37.519804 | orchestrator | 2025-05-29 00:43:37 | INFO  | It takes a moment until task dac4761e-4381-43f8-9f92-c2dfa307b301 (ceph-create-lvm-devices) has been started and output is visible here.
2025-05-29 00:43:40.653595 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-29 00:43:41.185343 | orchestrator |
2025-05-29 00:43:41.185842 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-29 00:43:41.186464 | orchestrator |
2025-05-29 00:43:41.187600 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-29 00:43:41.188425 | orchestrator | Thursday 29 May 2025 00:43:41 +0000 (0:00:00.452) 0:00:00.452 **********
2025-05-29 00:43:41.441887 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)]
2025-05-29 00:43:41.441993 | orchestrator |
2025-05-29 00:43:41.442852 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-29 00:43:41.443322 | orchestrator | Thursday 29 May 2025 00:43:41 +0000 (0:00:00.259) 0:00:00.711 **********
2025-05-29 00:43:41.716410 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:43:41.716514 | orchestrator |
2025-05-29 00:43:41.717825 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:41.718145 | orchestrator | Thursday 29 May 2025 00:43:41 +0000 (0:00:00.270) 0:00:00.982 **********
2025-05-29 00:43:42.451646 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop0)
2025-05-29 00:43:42.452593 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop1)
2025-05-29 00:43:42.452685 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop2)
2025-05-29 00:43:42.452701 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop3)
2025-05-29 00:43:42.453545 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop4)
2025-05-29 00:43:42.453614 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop5)
2025-05-29 00:43:42.454093 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop6)
2025-05-29 00:43:42.454512 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=loop7)
2025-05-29 00:43:42.454854 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sda)
2025-05-29 00:43:42.456980 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdb)
2025-05-29 00:43:42.457027 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdc)
2025-05-29 00:43:42.457049 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sdd)
2025-05-29 00:43:42.457070 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-3 => (item=sr0)
2025-05-29 00:43:42.457083 | orchestrator |
2025-05-29 00:43:42.457096 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:42.457108 | orchestrator | Thursday 29 May 2025 00:43:42 +0000 (0:00:00.738) 0:00:01.721 **********
2025-05-29 00:43:42.662831 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:43:42.663002 | orchestrator |
2025-05-29 00:43:42.663347 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:42.663931 | orchestrator | Thursday 29 May 2025 00:43:42 +0000 (0:00:00.213) 0:00:01.935 **********
2025-05-29 00:43:42.872156 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:43:42.872393 | orchestrator |
2025-05-29 00:43:42.872632 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:42.873039 | orchestrator | Thursday 29 May 2025 00:43:42 +0000 (0:00:00.205) 0:00:02.140 **********
2025-05-29 00:43:43.068162 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:43:43.068711 | orchestrator |
2025-05-29 00:43:43.069198 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:43.069464 | orchestrator | Thursday 29 May 2025 00:43:43 +0000 (0:00:00.199) 0:00:02.340 **********
2025-05-29 00:43:43.268037 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:43:43.268138 | orchestrator |
2025-05-29 00:43:43.269162 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:43.269431 | orchestrator | Thursday 29 May 2025 00:43:43 +0000 (0:00:00.198) 0:00:02.539 **********
2025-05-29 00:43:43.484713 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:43:43.484920 | orchestrator |
2025-05-29 00:43:43.485986 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:43.487234 | orchestrator | Thursday 29 May 2025 00:43:43 +0000 (0:00:00.217) 0:00:02.756 **********
2025-05-29 00:43:43.684119 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:43:43.684700 | orchestrator |
2025-05-29 00:43:43.685319 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:43.686605 | orchestrator | Thursday 29 May 2025 00:43:43 +0000 (0:00:00.199) 0:00:02.956 **********
2025-05-29 00:43:43.886681 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:43:43.886893 | orchestrator |
2025-05-29 00:43:43.887931 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:43.888560 | orchestrator | Thursday 29 May 2025 00:43:43 +0000 (0:00:00.201) 0:00:03.157 **********
2025-05-29 00:43:44.077541 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:43:44.085645 | orchestrator |
2025-05-29 00:43:44.086088 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:44.086679 | orchestrator | Thursday 29 May 2025 00:43:44 +0000 (0:00:00.191) 0:00:03.348 **********
2025-05-29 00:43:44.737814 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155)
2025-05-29 00:43:44.738147 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155)
2025-05-29 00:43:44.739025 | orchestrator |
2025-05-29 00:43:44.740354 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:44.740883 | orchestrator | Thursday 29 May 2025 00:43:44 +0000 (0:00:00.659) 0:00:04.008 **********
2025-05-29 00:43:45.382188 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_172ad3b6-4b22-4cdf-a28e-ac5da2182fda)
2025-05-29 00:43:45.382608 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_172ad3b6-4b22-4cdf-a28e-ac5da2182fda)
2025-05-29 00:43:45.383601 | orchestrator |
2025-05-29 00:43:45.384558 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:45.385556 | orchestrator | Thursday 29 May 2025 00:43:45 +0000 (0:00:00.641) 0:00:04.650 **********
2025-05-29 00:43:45.808743 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_81c2fe1f-38cc-49f7-ae7d-3d898626253d)
2025-05-29 00:43:45.808849 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_81c2fe1f-38cc-49f7-ae7d-3d898626253d)
2025-05-29 00:43:45.808922 | orchestrator |
2025-05-29 00:43:45.809094 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:45.810295 | orchestrator | Thursday 29 May 2025 00:43:45 +0000 (0:00:00.429) 0:00:05.080 **********
2025-05-29 00:43:46.247707 | orchestrator | ok: [testbed-node-3] => (item=scsi-0QEMU_QEMU_HARDDISK_872f8c6a-38b8-4598-af69-d174e2488207)
2025-05-29 00:43:46.247888 | orchestrator | ok: [testbed-node-3] => (item=scsi-SQEMU_QEMU_HARDDISK_872f8c6a-38b8-4598-af69-d174e2488207)
2025-05-29 00:43:46.248838 | orchestrator |
2025-05-29 00:43:46.251634 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:43:46.251731 | orchestrator | Thursday 29 May 2025 00:43:46 +0000 (0:00:00.437) 0:00:05.518 **********
2025-05-29 00:43:46.570334 | orchestrator | ok: [testbed-node-3] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-29 00:43:46.570441 | orchestrator |
2025-05-29 00:43:46.570694 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:43:46.572301 | orchestrator | Thursday 29 May 2025 00:43:46 +0000 (0:00:00.323) 0:00:05.841 **********
2025-05-29 00:43:47.043512 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop0)
2025-05-29 00:43:47.046700 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop1)
2025-05-29 00:43:47.046897 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop2)
2025-05-29 00:43:47.047796 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop3)
2025-05-29 00:43:47.048146 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop4)
2025-05-29 00:43:47.048659 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop5)
2025-05-29 00:43:47.048994 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop6)
2025-05-29 00:43:47.049368 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=loop7)
2025-05-29 00:43:47.049963 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sda)
2025-05-29 00:43:47.050289 | orchestrator | included:
/ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdb) 2025-05-29 00:43:47.052064 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdc) 2025-05-29 00:43:47.054970 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sdd) 2025-05-29 00:43:47.055630 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-3 => (item=sr0) 2025-05-29 00:43:47.056094 | orchestrator | 2025-05-29 00:43:47.056585 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:43:47.056998 | orchestrator | Thursday 29 May 2025 00:43:47 +0000 (0:00:00.470) 0:00:06.312 ********** 2025-05-29 00:43:47.239404 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:47.240013 | orchestrator | 2025-05-29 00:43:47.240601 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:43:47.241330 | orchestrator | Thursday 29 May 2025 00:43:47 +0000 (0:00:00.199) 0:00:06.511 ********** 2025-05-29 00:43:47.442184 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:47.442337 | orchestrator | 2025-05-29 00:43:47.443140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:43:47.443702 | orchestrator | Thursday 29 May 2025 00:43:47 +0000 (0:00:00.202) 0:00:06.713 ********** 2025-05-29 00:43:47.653956 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:47.654548 | orchestrator | 2025-05-29 00:43:47.655241 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:43:47.655935 | orchestrator | Thursday 29 May 2025 00:43:47 +0000 (0:00:00.211) 0:00:06.925 ********** 2025-05-29 00:43:47.854791 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:47.856089 | orchestrator | 2025-05-29 00:43:47.856584 | orchestrator | TASK [Add known 
partitions to the list of available block devices] ************* 2025-05-29 00:43:47.857190 | orchestrator | Thursday 29 May 2025 00:43:47 +0000 (0:00:00.201) 0:00:07.126 ********** 2025-05-29 00:43:48.392592 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:48.393061 | orchestrator | 2025-05-29 00:43:48.393864 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:43:48.395119 | orchestrator | Thursday 29 May 2025 00:43:48 +0000 (0:00:00.537) 0:00:07.664 ********** 2025-05-29 00:43:48.589151 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:48.589326 | orchestrator | 2025-05-29 00:43:48.589939 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:43:48.589965 | orchestrator | Thursday 29 May 2025 00:43:48 +0000 (0:00:00.195) 0:00:07.859 ********** 2025-05-29 00:43:48.790651 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:48.792943 | orchestrator | 2025-05-29 00:43:48.793698 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:43:48.794398 | orchestrator | Thursday 29 May 2025 00:43:48 +0000 (0:00:00.202) 0:00:08.062 ********** 2025-05-29 00:43:48.991139 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:48.991785 | orchestrator | 2025-05-29 00:43:48.992748 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:43:48.993250 | orchestrator | Thursday 29 May 2025 00:43:48 +0000 (0:00:00.199) 0:00:08.261 ********** 2025-05-29 00:43:49.686918 | orchestrator | ok: [testbed-node-3] => (item=sda1) 2025-05-29 00:43:49.687101 | orchestrator | ok: [testbed-node-3] => (item=sda14) 2025-05-29 00:43:49.688011 | orchestrator | ok: [testbed-node-3] => (item=sda15) 2025-05-29 00:43:49.688992 | orchestrator | ok: [testbed-node-3] => (item=sda16) 2025-05-29 00:43:49.689475 | orchestrator | 2025-05-29 
00:43:49.690275 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:43:49.690856 | orchestrator | Thursday 29 May 2025 00:43:49 +0000 (0:00:00.697) 0:00:08.958 ********** 2025-05-29 00:43:49.885689 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:49.885867 | orchestrator | 2025-05-29 00:43:49.886076 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:43:49.886405 | orchestrator | Thursday 29 May 2025 00:43:49 +0000 (0:00:00.199) 0:00:09.157 ********** 2025-05-29 00:43:50.084552 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:50.085467 | orchestrator | 2025-05-29 00:43:50.091538 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:43:50.091727 | orchestrator | Thursday 29 May 2025 00:43:50 +0000 (0:00:00.197) 0:00:09.355 ********** 2025-05-29 00:43:50.269284 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:50.269406 | orchestrator | 2025-05-29 00:43:50.269510 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:43:50.270264 | orchestrator | Thursday 29 May 2025 00:43:50 +0000 (0:00:00.183) 0:00:09.539 ********** 2025-05-29 00:43:50.454915 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:50.455023 | orchestrator | 2025-05-29 00:43:50.455734 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-29 00:43:50.456331 | orchestrator | Thursday 29 May 2025 00:43:50 +0000 (0:00:00.186) 0:00:09.726 ********** 2025-05-29 00:43:50.595162 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:50.595854 | orchestrator | 2025-05-29 00:43:50.596151 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-29 00:43:50.597986 | orchestrator | Thursday 29 May 2025 00:43:50 +0000 (0:00:00.139) 
0:00:09.865 ********** 2025-05-29 00:43:50.798500 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'b02a0e5a-ac94-54a1-88a1-38ba26e145f6'}}) 2025-05-29 00:43:50.799444 | orchestrator | ok: [testbed-node-3] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '81bd5020-0460-5411-80bb-35101e63cce8'}}) 2025-05-29 00:43:50.799483 | orchestrator | 2025-05-29 00:43:50.799753 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-29 00:43:50.800054 | orchestrator | Thursday 29 May 2025 00:43:50 +0000 (0:00:00.201) 0:00:10.067 ********** 2025-05-29 00:43:53.082288 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'}) 2025-05-29 00:43:53.083016 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'}) 2025-05-29 00:43:53.083410 | orchestrator | 2025-05-29 00:43:53.083840 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-29 00:43:53.084374 | orchestrator | Thursday 29 May 2025 00:43:53 +0000 (0:00:02.286) 0:00:12.353 ********** 2025-05-29 00:43:53.248360 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:43:53.248502 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:43:53.249077 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:53.249740 | orchestrator | 2025-05-29 00:43:53.250471 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-29 00:43:53.253066 | orchestrator | Thursday 29 May 2025 
00:43:53 +0000 (0:00:00.166) 0:00:12.520 ********** 2025-05-29 00:43:54.711280 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'}) 2025-05-29 00:43:54.711410 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'}) 2025-05-29 00:43:54.711824 | orchestrator | 2025-05-29 00:43:54.712898 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-29 00:43:54.713562 | orchestrator | Thursday 29 May 2025 00:43:54 +0000 (0:00:01.461) 0:00:13.981 ********** 2025-05-29 00:43:54.885900 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:43:54.886084 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:43:54.887281 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:54.888511 | orchestrator | 2025-05-29 00:43:54.890106 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-29 00:43:54.890494 | orchestrator | Thursday 29 May 2025 00:43:54 +0000 (0:00:00.174) 0:00:14.156 ********** 2025-05-29 00:43:55.024018 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:55.024358 | orchestrator | 2025-05-29 00:43:55.025399 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-29 00:43:55.026431 | orchestrator | Thursday 29 May 2025 00:43:55 +0000 (0:00:00.138) 0:00:14.295 ********** 2025-05-29 00:43:55.193274 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 
'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:43:55.193452 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:43:55.194316 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:55.197253 | orchestrator | 2025-05-29 00:43:55.197554 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-29 00:43:55.198675 | orchestrator | Thursday 29 May 2025 00:43:55 +0000 (0:00:00.168) 0:00:14.463 ********** 2025-05-29 00:43:55.333308 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:55.333665 | orchestrator | 2025-05-29 00:43:55.334788 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-29 00:43:55.335862 | orchestrator | Thursday 29 May 2025 00:43:55 +0000 (0:00:00.141) 0:00:14.605 ********** 2025-05-29 00:43:55.496125 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:43:55.496382 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:43:55.497736 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:55.498991 | orchestrator | 2025-05-29 00:43:55.499143 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-29 00:43:55.499995 | orchestrator | Thursday 29 May 2025 00:43:55 +0000 (0:00:00.162) 0:00:14.767 ********** 2025-05-29 00:43:55.780400 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:55.781305 | orchestrator | 2025-05-29 00:43:55.782550 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-29 00:43:55.783289 | orchestrator | 
Thursday 29 May 2025 00:43:55 +0000 (0:00:00.284) 0:00:15.052 ********** 2025-05-29 00:43:55.955011 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:43:55.956074 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:43:55.956607 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:55.957086 | orchestrator | 2025-05-29 00:43:55.957534 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-29 00:43:55.957944 | orchestrator | Thursday 29 May 2025 00:43:55 +0000 (0:00:00.174) 0:00:15.226 ********** 2025-05-29 00:43:56.112410 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:43:56.113429 | orchestrator | 2025-05-29 00:43:56.114614 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-29 00:43:56.115612 | orchestrator | Thursday 29 May 2025 00:43:56 +0000 (0:00:00.158) 0:00:15.385 ********** 2025-05-29 00:43:56.291248 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:43:56.292809 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:43:56.293271 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:56.294868 | orchestrator | 2025-05-29 00:43:56.295804 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] *************** 2025-05-29 00:43:56.296488 | orchestrator | Thursday 29 May 2025 00:43:56 +0000 (0:00:00.177) 0:00:15.562 ********** 2025-05-29 00:43:56.459917 | orchestrator | skipping: [testbed-node-3] => 
(item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:43:56.460612 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:43:56.461330 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:56.462240 | orchestrator | 2025-05-29 00:43:56.462568 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-29 00:43:56.463363 | orchestrator | Thursday 29 May 2025 00:43:56 +0000 (0:00:00.169) 0:00:15.732 ********** 2025-05-29 00:43:56.623333 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:43:56.623521 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:43:56.624358 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:56.624906 | orchestrator | 2025-05-29 00:43:56.625638 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-29 00:43:56.627649 | orchestrator | Thursday 29 May 2025 00:43:56 +0000 (0:00:00.162) 0:00:15.895 ********** 2025-05-29 00:43:56.755834 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:56.756006 | orchestrator | 2025-05-29 00:43:56.756616 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-29 00:43:56.757312 | orchestrator | Thursday 29 May 2025 00:43:56 +0000 (0:00:00.132) 0:00:16.027 ********** 2025-05-29 00:43:56.897925 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:56.899652 | orchestrator | 2025-05-29 00:43:56.900356 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a 
DB+WAL VG] ***************** 2025-05-29 00:43:56.901145 | orchestrator | Thursday 29 May 2025 00:43:56 +0000 (0:00:00.141) 0:00:16.169 ********** 2025-05-29 00:43:57.026522 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:57.026734 | orchestrator | 2025-05-29 00:43:57.027756 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-29 00:43:57.028937 | orchestrator | Thursday 29 May 2025 00:43:57 +0000 (0:00:00.128) 0:00:16.297 ********** 2025-05-29 00:43:57.161965 | orchestrator | ok: [testbed-node-3] => { 2025-05-29 00:43:57.162796 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-29 00:43:57.162985 | orchestrator | } 2025-05-29 00:43:57.163710 | orchestrator | 2025-05-29 00:43:57.164401 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-29 00:43:57.165334 | orchestrator | Thursday 29 May 2025 00:43:57 +0000 (0:00:00.136) 0:00:16.434 ********** 2025-05-29 00:43:57.300101 | orchestrator | ok: [testbed-node-3] => { 2025-05-29 00:43:57.300867 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-29 00:43:57.300990 | orchestrator | } 2025-05-29 00:43:57.301694 | orchestrator | 2025-05-29 00:43:57.302535 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-29 00:43:57.304734 | orchestrator | Thursday 29 May 2025 00:43:57 +0000 (0:00:00.138) 0:00:16.572 ********** 2025-05-29 00:43:57.441113 | orchestrator | ok: [testbed-node-3] => { 2025-05-29 00:43:57.441263 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-29 00:43:57.441404 | orchestrator | } 2025-05-29 00:43:57.442107 | orchestrator | 2025-05-29 00:43:57.442437 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-29 00:43:57.443486 | orchestrator | Thursday 29 May 2025 00:43:57 +0000 (0:00:00.140) 0:00:16.713 ********** 2025-05-29 00:43:58.347140 | orchestrator | ok: 
[testbed-node-3] 2025-05-29 00:43:58.347315 | orchestrator | 2025-05-29 00:43:58.349224 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-29 00:43:58.349667 | orchestrator | Thursday 29 May 2025 00:43:58 +0000 (0:00:00.903) 0:00:17.617 ********** 2025-05-29 00:43:58.874749 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:43:58.875230 | orchestrator | 2025-05-29 00:43:58.875877 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-29 00:43:58.876555 | orchestrator | Thursday 29 May 2025 00:43:58 +0000 (0:00:00.529) 0:00:18.146 ********** 2025-05-29 00:43:59.422069 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:43:59.422174 | orchestrator | 2025-05-29 00:43:59.422347 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-29 00:43:59.422369 | orchestrator | Thursday 29 May 2025 00:43:59 +0000 (0:00:00.546) 0:00:18.693 ********** 2025-05-29 00:43:59.569247 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:43:59.569980 | orchestrator | 2025-05-29 00:43:59.570366 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-29 00:43:59.571215 | orchestrator | Thursday 29 May 2025 00:43:59 +0000 (0:00:00.147) 0:00:18.840 ********** 2025-05-29 00:43:59.677057 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:59.677675 | orchestrator | 2025-05-29 00:43:59.678102 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-29 00:43:59.678726 | orchestrator | Thursday 29 May 2025 00:43:59 +0000 (0:00:00.107) 0:00:18.947 ********** 2025-05-29 00:43:59.791934 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:43:59.793601 | orchestrator | 2025-05-29 00:43:59.794629 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-29 00:43:59.795728 | orchestrator | 
Thursday 29 May 2025 00:43:59 +0000 (0:00:00.115) 0:00:19.063 ********** 2025-05-29 00:43:59.937282 | orchestrator | ok: [testbed-node-3] => { 2025-05-29 00:43:59.938102 | orchestrator |  "vgs_report": { 2025-05-29 00:43:59.939471 | orchestrator |  "vg": [] 2025-05-29 00:43:59.940454 | orchestrator |  } 2025-05-29 00:43:59.941530 | orchestrator | } 2025-05-29 00:43:59.942490 | orchestrator | 2025-05-29 00:43:59.942889 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-29 00:43:59.943489 | orchestrator | Thursday 29 May 2025 00:43:59 +0000 (0:00:00.145) 0:00:19.208 ********** 2025-05-29 00:44:00.078882 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:00.079071 | orchestrator | 2025-05-29 00:44:00.079900 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-29 00:44:00.080718 | orchestrator | Thursday 29 May 2025 00:44:00 +0000 (0:00:00.139) 0:00:19.348 ********** 2025-05-29 00:44:00.244879 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:00.245467 | orchestrator | 2025-05-29 00:44:00.246499 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-29 00:44:00.248126 | orchestrator | Thursday 29 May 2025 00:44:00 +0000 (0:00:00.167) 0:00:19.515 ********** 2025-05-29 00:44:00.388587 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:00.390578 | orchestrator | 2025-05-29 00:44:00.392008 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-29 00:44:00.392960 | orchestrator | Thursday 29 May 2025 00:44:00 +0000 (0:00:00.143) 0:00:19.659 ********** 2025-05-29 00:44:00.708952 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:00.709735 | orchestrator | 2025-05-29 00:44:00.711046 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-29 00:44:00.711697 | orchestrator | 
Thursday 29 May 2025 00:44:00 +0000 (0:00:00.320) 0:00:19.980 ********** 2025-05-29 00:44:00.863250 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:00.864090 | orchestrator | 2025-05-29 00:44:00.865016 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-29 00:44:00.865353 | orchestrator | Thursday 29 May 2025 00:44:00 +0000 (0:00:00.154) 0:00:20.135 ********** 2025-05-29 00:44:00.991447 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:00.992221 | orchestrator | 2025-05-29 00:44:00.993294 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-29 00:44:00.993534 | orchestrator | Thursday 29 May 2025 00:44:00 +0000 (0:00:00.128) 0:00:20.263 ********** 2025-05-29 00:44:01.142337 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:01.142570 | orchestrator | 2025-05-29 00:44:01.144290 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-29 00:44:01.144958 | orchestrator | Thursday 29 May 2025 00:44:01 +0000 (0:00:00.147) 0:00:20.410 ********** 2025-05-29 00:44:01.284418 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:01.285504 | orchestrator | 2025-05-29 00:44:01.286706 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-29 00:44:01.287731 | orchestrator | Thursday 29 May 2025 00:44:01 +0000 (0:00:00.145) 0:00:20.556 ********** 2025-05-29 00:44:01.425541 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:01.426208 | orchestrator | 2025-05-29 00:44:01.426977 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 2025-05-29 00:44:01.428458 | orchestrator | Thursday 29 May 2025 00:44:01 +0000 (0:00:00.139) 0:00:20.695 ********** 2025-05-29 00:44:01.571635 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:01.571813 | orchestrator | 2025-05-29 00:44:01.573913 
| orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-29 00:44:01.574607 | orchestrator | Thursday 29 May 2025 00:44:01 +0000 (0:00:00.147) 0:00:20.842 ********** 2025-05-29 00:44:01.713927 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:01.714990 | orchestrator | 2025-05-29 00:44:01.715898 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-29 00:44:01.717055 | orchestrator | Thursday 29 May 2025 00:44:01 +0000 (0:00:00.142) 0:00:20.985 ********** 2025-05-29 00:44:01.847895 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:01.847982 | orchestrator | 2025-05-29 00:44:01.847996 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-29 00:44:01.848082 | orchestrator | Thursday 29 May 2025 00:44:01 +0000 (0:00:00.135) 0:00:21.120 ********** 2025-05-29 00:44:01.982507 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:01.982600 | orchestrator | 2025-05-29 00:44:01.983487 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-29 00:44:01.985341 | orchestrator | Thursday 29 May 2025 00:44:01 +0000 (0:00:00.132) 0:00:21.252 ********** 2025-05-29 00:44:02.106394 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:02.106525 | orchestrator | 2025-05-29 00:44:02.108207 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-29 00:44:02.109014 | orchestrator | Thursday 29 May 2025 00:44:02 +0000 (0:00:00.123) 0:00:21.376 ********** 2025-05-29 00:44:02.288523 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:44:02.288629 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 
'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:44:02.288818 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:02.288918 | orchestrator | 2025-05-29 00:44:02.289707 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-29 00:44:02.289792 | orchestrator | Thursday 29 May 2025 00:44:02 +0000 (0:00:00.185) 0:00:21.561 ********** 2025-05-29 00:44:02.639671 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:44:02.640522 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:44:02.641351 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:02.643084 | orchestrator | 2025-05-29 00:44:02.644374 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-29 00:44:02.645170 | orchestrator | Thursday 29 May 2025 00:44:02 +0000 (0:00:00.350) 0:00:21.911 ********** 2025-05-29 00:44:02.817792 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:44:02.818504 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:44:02.818812 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:02.819429 | orchestrator | 2025-05-29 00:44:02.819917 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] ***************************** 2025-05-29 00:44:02.820779 | orchestrator | Thursday 29 May 2025 00:44:02 +0000 (0:00:00.178) 0:00:22.090 ********** 2025-05-29 00:44:02.984069 | orchestrator | skipping: [testbed-node-3] => (item={'data': 
'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:44:02.984169 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:44:02.984615 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:02.985031 | orchestrator | 2025-05-29 00:44:02.985667 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-29 00:44:02.985956 | orchestrator | Thursday 29 May 2025 00:44:02 +0000 (0:00:00.165) 0:00:22.255 ********** 2025-05-29 00:44:03.145247 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:44:03.145766 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:44:03.146733 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:03.148285 | orchestrator | 2025-05-29 00:44:03.149041 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-29 00:44:03.149477 | orchestrator | Thursday 29 May 2025 00:44:03 +0000 (0:00:00.160) 0:00:22.416 ********** 2025-05-29 00:44:03.330467 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:44:03.331292 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:44:03.331738 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:03.332529 | orchestrator | 2025-05-29 00:44:03.335932 | orchestrator | TASK [Create DB LVs for 
ceph_db_wal_devices] *********************************** 2025-05-29 00:44:03.336011 | orchestrator | Thursday 29 May 2025 00:44:03 +0000 (0:00:00.185) 0:00:22.602 ********** 2025-05-29 00:44:03.501699 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:44:03.501890 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:44:03.502593 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:03.503317 | orchestrator | 2025-05-29 00:44:03.504289 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-29 00:44:03.504944 | orchestrator | Thursday 29 May 2025 00:44:03 +0000 (0:00:00.171) 0:00:22.773 ********** 2025-05-29 00:44:03.675111 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:44:03.675249 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:44:03.676145 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:03.676781 | orchestrator | 2025-05-29 00:44:03.677041 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-29 00:44:03.677595 | orchestrator | Thursday 29 May 2025 00:44:03 +0000 (0:00:00.171) 0:00:22.944 ********** 2025-05-29 00:44:04.245337 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:44:04.246081 | orchestrator | 2025-05-29 00:44:04.246117 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-29 00:44:04.246771 | orchestrator | Thursday 29 May 2025 00:44:04 +0000 
(0:00:00.571) 0:00:23.516 ********** 2025-05-29 00:44:04.788416 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:44:04.789720 | orchestrator | 2025-05-29 00:44:04.789960 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-29 00:44:04.790938 | orchestrator | Thursday 29 May 2025 00:44:04 +0000 (0:00:00.544) 0:00:24.060 ********** 2025-05-29 00:44:04.940874 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:44:04.941555 | orchestrator | 2025-05-29 00:44:04.942695 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-29 00:44:04.943507 | orchestrator | Thursday 29 May 2025 00:44:04 +0000 (0:00:00.151) 0:00:24.212 ********** 2025-05-29 00:44:05.133227 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'vg_name': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'}) 2025-05-29 00:44:05.133898 | orchestrator | ok: [testbed-node-3] => (item={'lv_name': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'vg_name': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'}) 2025-05-29 00:44:05.134647 | orchestrator | 2025-05-29 00:44:05.137341 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-29 00:44:05.138343 | orchestrator | Thursday 29 May 2025 00:44:05 +0000 (0:00:00.191) 0:00:24.403 ********** 2025-05-29 00:44:05.505058 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:44:05.505435 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:44:05.507027 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:05.507391 | orchestrator | 2025-05-29 00:44:05.511095 | orchestrator | TASK [Fail if DB LV defined in 
lvm_volumes is missing] ************************* 2025-05-29 00:44:05.512097 | orchestrator | Thursday 29 May 2025 00:44:05 +0000 (0:00:00.371) 0:00:24.775 ********** 2025-05-29 00:44:05.676399 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:44:05.677447 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:44:05.681317 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:05.682592 | orchestrator | 2025-05-29 00:44:05.682946 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-29 00:44:05.684456 | orchestrator | Thursday 29 May 2025 00:44:05 +0000 (0:00:00.170) 0:00:24.946 ********** 2025-05-29 00:44:05.850647 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})  2025-05-29 00:44:05.851331 | orchestrator | skipping: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})  2025-05-29 00:44:05.852848 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:44:05.858133 | orchestrator | 2025-05-29 00:44:05.858841 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-29 00:44:05.860381 | orchestrator | Thursday 29 May 2025 00:44:05 +0000 (0:00:00.175) 0:00:25.122 ********** 2025-05-29 00:44:06.546705 | orchestrator | ok: [testbed-node-3] => { 2025-05-29 00:44:06.546946 | orchestrator |  "lvm_report": { 2025-05-29 00:44:06.551701 | orchestrator |  "lv": [ 2025-05-29 00:44:06.552340 | orchestrator |  { 2025-05-29 00:44:06.553206 | orchestrator |  "lv_name": 
"osd-block-81bd5020-0460-5411-80bb-35101e63cce8", 2025-05-29 00:44:06.553980 | orchestrator |  "vg_name": "ceph-81bd5020-0460-5411-80bb-35101e63cce8" 2025-05-29 00:44:06.554892 | orchestrator |  }, 2025-05-29 00:44:06.557061 | orchestrator |  { 2025-05-29 00:44:06.558300 | orchestrator |  "lv_name": "osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6", 2025-05-29 00:44:06.558791 | orchestrator |  "vg_name": "ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6" 2025-05-29 00:44:06.559665 | orchestrator |  } 2025-05-29 00:44:06.561813 | orchestrator |  ], 2025-05-29 00:44:06.562385 | orchestrator |  "pv": [ 2025-05-29 00:44:06.565501 | orchestrator |  { 2025-05-29 00:44:06.566417 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-29 00:44:06.566866 | orchestrator |  "vg_name": "ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6" 2025-05-29 00:44:06.567953 | orchestrator |  }, 2025-05-29 00:44:06.570243 | orchestrator |  { 2025-05-29 00:44:06.570993 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-29 00:44:06.571609 | orchestrator |  "vg_name": "ceph-81bd5020-0460-5411-80bb-35101e63cce8" 2025-05-29 00:44:06.572403 | orchestrator |  } 2025-05-29 00:44:06.572785 | orchestrator |  ] 2025-05-29 00:44:06.573750 | orchestrator |  } 2025-05-29 00:44:06.574161 | orchestrator | } 2025-05-29 00:44:06.575196 | orchestrator | 2025-05-29 00:44:06.575400 | orchestrator | PLAY [Ceph create LVM devices] ************************************************* 2025-05-29 00:44:06.576201 | orchestrator | 2025-05-29 00:44:06.576520 | orchestrator | TASK [Get extra vars for Ceph configuration] *********************************** 2025-05-29 00:44:06.578424 | orchestrator | Thursday 29 May 2025 00:44:06 +0000 (0:00:00.694) 0:00:25.817 ********** 2025-05-29 00:44:07.070559 | orchestrator | ok: [testbed-node-4 -> testbed-manager(192.168.16.5)] 2025-05-29 00:44:07.071392 | orchestrator | 2025-05-29 00:44:07.071793 | orchestrator | TASK [Get initial list of available block devices] ***************************** 2025-05-29 
00:44:07.073454 | orchestrator | Thursday 29 May 2025 00:44:07 +0000 (0:00:00.523) 0:00:26.341 ********** 2025-05-29 00:44:07.318424 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:44:07.321964 | orchestrator | 2025-05-29 00:44:07.322812 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:44:07.323378 | orchestrator | Thursday 29 May 2025 00:44:07 +0000 (0:00:00.246) 0:00:26.587 ********** 2025-05-29 00:44:07.794247 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop0) 2025-05-29 00:44:07.794349 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop1) 2025-05-29 00:44:07.794777 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop2) 2025-05-29 00:44:07.795397 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop3) 2025-05-29 00:44:07.797102 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop4) 2025-05-29 00:44:07.800235 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop5) 2025-05-29 00:44:07.800633 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop6) 2025-05-29 00:44:07.801280 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=loop7) 2025-05-29 00:44:07.801931 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sda) 2025-05-29 00:44:07.803932 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdb) 2025-05-29 00:44:07.807252 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdc) 2025-05-29 00:44:07.807768 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sdd) 2025-05-29 00:44:07.807969 | 
orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-4 => (item=sr0) 2025-05-29 00:44:07.808906 | orchestrator | 2025-05-29 00:44:07.809992 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:44:07.810964 | orchestrator | Thursday 29 May 2025 00:44:07 +0000 (0:00:00.477) 0:00:27.065 ********** 2025-05-29 00:44:08.001347 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:08.001587 | orchestrator | 2025-05-29 00:44:08.001699 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:44:08.002132 | orchestrator | Thursday 29 May 2025 00:44:07 +0000 (0:00:00.207) 0:00:27.273 ********** 2025-05-29 00:44:08.249257 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:08.249655 | orchestrator | 2025-05-29 00:44:08.252232 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:44:08.252801 | orchestrator | Thursday 29 May 2025 00:44:08 +0000 (0:00:00.246) 0:00:27.519 ********** 2025-05-29 00:44:08.461232 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:08.462296 | orchestrator | 2025-05-29 00:44:08.462371 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:44:08.462816 | orchestrator | Thursday 29 May 2025 00:44:08 +0000 (0:00:00.213) 0:00:27.732 ********** 2025-05-29 00:44:08.661318 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:08.661867 | orchestrator | 2025-05-29 00:44:08.662700 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:44:08.663487 | orchestrator | Thursday 29 May 2025 00:44:08 +0000 (0:00:00.199) 0:00:27.932 ********** 2025-05-29 00:44:08.859595 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:08.859698 | orchestrator | 2025-05-29 00:44:08.860836 | orchestrator | TASK [Add known links to the 
list of available block devices] ****************** 2025-05-29 00:44:08.864782 | orchestrator | Thursday 29 May 2025 00:44:08 +0000 (0:00:00.199) 0:00:28.132 ********** 2025-05-29 00:44:09.044199 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:09.045402 | orchestrator | 2025-05-29 00:44:09.049406 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:44:09.049939 | orchestrator | Thursday 29 May 2025 00:44:09 +0000 (0:00:00.184) 0:00:28.317 ********** 2025-05-29 00:44:09.497079 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:09.497423 | orchestrator | 2025-05-29 00:44:09.498586 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:44:09.499542 | orchestrator | Thursday 29 May 2025 00:44:09 +0000 (0:00:00.447) 0:00:28.765 ********** 2025-05-29 00:44:09.690424 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:09.694476 | orchestrator | 2025-05-29 00:44:09.694953 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:44:09.695671 | orchestrator | Thursday 29 May 2025 00:44:09 +0000 (0:00:00.197) 0:00:28.962 ********** 2025-05-29 00:44:10.086111 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61) 2025-05-29 00:44:10.087074 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61) 2025-05-29 00:44:10.090702 | orchestrator | 2025-05-29 00:44:10.091538 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:44:10.092051 | orchestrator | Thursday 29 May 2025 00:44:10 +0000 (0:00:00.396) 0:00:29.358 ********** 2025-05-29 00:44:10.550716 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_d4d6d7dc-ffab-40f4-8a14-6defed4afc9f) 2025-05-29 00:44:10.551062 | orchestrator | ok: 
[testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_d4d6d7dc-ffab-40f4-8a14-6defed4afc9f) 2025-05-29 00:44:10.551577 | orchestrator | 2025-05-29 00:44:10.553147 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:44:10.553243 | orchestrator | Thursday 29 May 2025 00:44:10 +0000 (0:00:00.463) 0:00:29.822 ********** 2025-05-29 00:44:10.947638 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_ab52b3eb-0fd7-41fe-9d4d-bdc516081274) 2025-05-29 00:44:10.948075 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_ab52b3eb-0fd7-41fe-9d4d-bdc516081274) 2025-05-29 00:44:10.948984 | orchestrator | 2025-05-29 00:44:10.949799 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:44:10.950445 | orchestrator | Thursday 29 May 2025 00:44:10 +0000 (0:00:00.398) 0:00:30.220 ********** 2025-05-29 00:44:11.344058 | orchestrator | ok: [testbed-node-4] => (item=scsi-0QEMU_QEMU_HARDDISK_f7dbb189-5858-4eca-9499-fceb9ae8f8d2) 2025-05-29 00:44:11.344857 | orchestrator | ok: [testbed-node-4] => (item=scsi-SQEMU_QEMU_HARDDISK_f7dbb189-5858-4eca-9499-fceb9ae8f8d2) 2025-05-29 00:44:11.345245 | orchestrator | 2025-05-29 00:44:11.348789 | orchestrator | TASK [Add known links to the list of available block devices] ****************** 2025-05-29 00:44:11.348870 | orchestrator | Thursday 29 May 2025 00:44:11 +0000 (0:00:00.395) 0:00:30.616 ********** 2025-05-29 00:44:11.639703 | orchestrator | ok: [testbed-node-4] => (item=ata-QEMU_DVD-ROM_QM00001) 2025-05-29 00:44:11.640555 | orchestrator | 2025-05-29 00:44:11.642792 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:11.642822 | orchestrator | Thursday 29 May 2025 00:44:11 +0000 (0:00:00.295) 0:00:30.911 ********** 2025-05-29 00:44:12.076382 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => 
(item=loop0) 2025-05-29 00:44:12.076524 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop1) 2025-05-29 00:44:12.076817 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop2) 2025-05-29 00:44:12.077535 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop3) 2025-05-29 00:44:12.077899 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop4) 2025-05-29 00:44:12.078475 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop5) 2025-05-29 00:44:12.078772 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop6) 2025-05-29 00:44:12.079514 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=loop7) 2025-05-29 00:44:12.079832 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sda) 2025-05-29 00:44:12.080360 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdb) 2025-05-29 00:44:12.080745 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdc) 2025-05-29 00:44:12.081024 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sdd) 2025-05-29 00:44:12.082676 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-4 => (item=sr0) 2025-05-29 00:44:12.082697 | orchestrator | 2025-05-29 00:44:12.082710 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:12.082721 | orchestrator | Thursday 29 May 2025 00:44:12 +0000 (0:00:00.437) 0:00:31.349 ********** 2025-05-29 00:44:12.252189 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:12.255347 | orchestrator | 2025-05-29 
00:44:12.256140 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:12.257259 | orchestrator | Thursday 29 May 2025 00:44:12 +0000 (0:00:00.174) 0:00:31.523 ********** 2025-05-29 00:44:12.554763 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:12.554975 | orchestrator | 2025-05-29 00:44:12.558759 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:12.559330 | orchestrator | Thursday 29 May 2025 00:44:12 +0000 (0:00:00.302) 0:00:31.826 ********** 2025-05-29 00:44:12.753125 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:12.756004 | orchestrator | 2025-05-29 00:44:12.756412 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:12.756546 | orchestrator | Thursday 29 May 2025 00:44:12 +0000 (0:00:00.198) 0:00:32.024 ********** 2025-05-29 00:44:12.927281 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:12.928126 | orchestrator | 2025-05-29 00:44:12.932691 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:12.933032 | orchestrator | Thursday 29 May 2025 00:44:12 +0000 (0:00:00.175) 0:00:32.200 ********** 2025-05-29 00:44:13.109086 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:13.109248 | orchestrator | 2025-05-29 00:44:13.113398 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:13.113475 | orchestrator | Thursday 29 May 2025 00:44:13 +0000 (0:00:00.181) 0:00:32.381 ********** 2025-05-29 00:44:13.319945 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:13.320923 | orchestrator | 2025-05-29 00:44:13.322323 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:13.323005 | orchestrator | Thursday 29 May 2025 00:44:13 +0000 (0:00:00.210) 
0:00:32.591 ********** 2025-05-29 00:44:13.505505 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:13.506238 | orchestrator | 2025-05-29 00:44:13.509212 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:13.510333 | orchestrator | Thursday 29 May 2025 00:44:13 +0000 (0:00:00.186) 0:00:32.777 ********** 2025-05-29 00:44:13.700909 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:13.701086 | orchestrator | 2025-05-29 00:44:13.701853 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:13.702320 | orchestrator | Thursday 29 May 2025 00:44:13 +0000 (0:00:00.193) 0:00:32.971 ********** 2025-05-29 00:44:14.317108 | orchestrator | ok: [testbed-node-4] => (item=sda1) 2025-05-29 00:44:14.318123 | orchestrator | ok: [testbed-node-4] => (item=sda14) 2025-05-29 00:44:14.322630 | orchestrator | ok: [testbed-node-4] => (item=sda15) 2025-05-29 00:44:14.323072 | orchestrator | ok: [testbed-node-4] => (item=sda16) 2025-05-29 00:44:14.323627 | orchestrator | 2025-05-29 00:44:14.324276 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:14.327201 | orchestrator | Thursday 29 May 2025 00:44:14 +0000 (0:00:00.615) 0:00:33.587 ********** 2025-05-29 00:44:14.536264 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:14.536367 | orchestrator | 2025-05-29 00:44:14.536747 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:14.541096 | orchestrator | Thursday 29 May 2025 00:44:14 +0000 (0:00:00.219) 0:00:33.807 ********** 2025-05-29 00:44:14.746401 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:14.747197 | orchestrator | 2025-05-29 00:44:14.747665 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:14.748231 | orchestrator | Thursday 29 
May 2025 00:44:14 +0000 (0:00:00.210) 0:00:34.017 ********** 2025-05-29 00:44:14.949765 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:14.950617 | orchestrator | 2025-05-29 00:44:14.951343 | orchestrator | TASK [Add known partitions to the list of available block devices] ************* 2025-05-29 00:44:14.954137 | orchestrator | Thursday 29 May 2025 00:44:14 +0000 (0:00:00.202) 0:00:34.220 ********** 2025-05-29 00:44:15.581422 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:15.581581 | orchestrator | 2025-05-29 00:44:15.584246 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] ******************* 2025-05-29 00:44:15.584987 | orchestrator | Thursday 29 May 2025 00:44:15 +0000 (0:00:00.631) 0:00:34.851 ********** 2025-05-29 00:44:15.738569 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:15.739912 | orchestrator | 2025-05-29 00:44:15.740976 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] ******************* 2025-05-29 00:44:15.741070 | orchestrator | Thursday 29 May 2025 00:44:15 +0000 (0:00:00.159) 0:00:35.010 ********** 2025-05-29 00:44:15.970516 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': '2961dba5-5d3e-5262-aab3-a8717ef28b96'}}) 2025-05-29 00:44:15.970612 | orchestrator | ok: [testbed-node-4] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '10c8172d-d6a1-5b27-956e-8c5bc818fcb1'}}) 2025-05-29 00:44:15.972412 | orchestrator | 2025-05-29 00:44:15.974922 | orchestrator | TASK [Create block VGs] ******************************************************** 2025-05-29 00:44:15.975348 | orchestrator | Thursday 29 May 2025 00:44:15 +0000 (0:00:00.231) 0:00:35.241 ********** 2025-05-29 00:44:17.839774 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'}) 2025-05-29 00:44:17.839871 | orchestrator | changed: [testbed-node-4] => 
(item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'}) 2025-05-29 00:44:17.839883 | orchestrator | 2025-05-29 00:44:17.839895 | orchestrator | TASK [Print 'Create block VGs'] ************************************************ 2025-05-29 00:44:17.840011 | orchestrator | Thursday 29 May 2025 00:44:17 +0000 (0:00:01.868) 0:00:37.109 ********** 2025-05-29 00:44:18.023456 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})  2025-05-29 00:44:18.023620 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})  2025-05-29 00:44:18.023639 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:18.023956 | orchestrator | 2025-05-29 00:44:18.024351 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-29 00:44:18.024646 | orchestrator | Thursday 29 May 2025 00:44:18 +0000 (0:00:00.186) 0:00:37.295 ********** 2025-05-29 00:44:19.313632 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'}) 2025-05-29 00:44:19.313810 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'}) 2025-05-29 00:44:19.314319 | orchestrator | 2025-05-29 00:44:19.315517 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-29 00:44:19.316280 | orchestrator | Thursday 29 May 2025 00:44:19 +0000 (0:00:01.288) 0:00:38.584 ********** 2025-05-29 00:44:19.502455 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 
'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})  2025-05-29 00:44:19.502556 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})  2025-05-29 00:44:19.502570 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:19.502583 | orchestrator | 2025-05-29 00:44:19.502595 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-29 00:44:19.502608 | orchestrator | Thursday 29 May 2025 00:44:19 +0000 (0:00:00.188) 0:00:38.772 ********** 2025-05-29 00:44:19.648568 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:19.649094 | orchestrator | 2025-05-29 00:44:19.649501 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-29 00:44:19.650078 | orchestrator | Thursday 29 May 2025 00:44:19 +0000 (0:00:00.147) 0:00:38.919 ********** 2025-05-29 00:44:19.809614 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})  2025-05-29 00:44:19.810199 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})  2025-05-29 00:44:19.810878 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:19.812076 | orchestrator | 2025-05-29 00:44:19.812492 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-29 00:44:19.815193 | orchestrator | Thursday 29 May 2025 00:44:19 +0000 (0:00:00.162) 0:00:39.081 ********** 2025-05-29 00:44:20.135137 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:20.136052 | orchestrator | 2025-05-29 00:44:20.137582 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-29 00:44:20.138200 | orchestrator | 
Thursday 29 May 2025 00:44:20 +0000 (0:00:00.322) 0:00:39.404 ********** 2025-05-29 00:44:20.334725 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})  2025-05-29 00:44:20.334882 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})  2025-05-29 00:44:20.336116 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:20.337334 | orchestrator | 2025-05-29 00:44:20.337934 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-29 00:44:20.339017 | orchestrator | Thursday 29 May 2025 00:44:20 +0000 (0:00:00.201) 0:00:39.606 ********** 2025-05-29 00:44:20.481938 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:20.482228 | orchestrator | 2025-05-29 00:44:20.483324 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-29 00:44:20.484387 | orchestrator | Thursday 29 May 2025 00:44:20 +0000 (0:00:00.147) 0:00:39.753 ********** 2025-05-29 00:44:20.640386 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})  2025-05-29 00:44:20.640484 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})  2025-05-29 00:44:20.641788 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:44:20.642580 | orchestrator | 2025-05-29 00:44:20.643671 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-29 00:44:20.644587 | orchestrator | Thursday 29 May 2025 00:44:20 +0000 (0:00:00.158) 0:00:39.911 ********** 2025-05-29 00:44:20.792815 | orchestrator | ok: [testbed-node-4] 
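
The "Create dict of block VGs -> PVs from ceph_osd_devices" and "Create block VGs/LVs" tasks above iterate over per-OSD name pairs derived from each device's `osd_lvm_uuid`. A minimal sketch of that derivation, assuming the naming scheme visible in the loop items (`osd-block-<uuid>` / `ceph-<uuid>`) — the function name and structure are illustrative, not taken from the playbook source:

```python
# Sketch: derive the LV/VG name pairs seen in the log's loop items from a
# ceph_osd_devices mapping. Names/structure are inferred from the log output
# above, not from the OSISM playbook itself.

def lvm_volumes_from_osd_devices(ceph_osd_devices):
    """Build the per-OSD {'data', 'data_vg'} pairs the tasks iterate over.

    Each device entry carries an osd_lvm_uuid; the log shows the resulting
    LV named 'osd-block-<uuid>' inside a VG named 'ceph-<uuid>'.
    """
    return [
        {
            "data": f"osd-block-{spec['osd_lvm_uuid']}",
            "data_vg": f"ceph-{spec['osd_lvm_uuid']}",
        }
        for device, spec in sorted(ceph_osd_devices.items())
    ]

# Example mirroring testbed-node-4 in the log:
devices = {
    "sdb": {"osd_lvm_uuid": "2961dba5-5d3e-5262-aab3-a8717ef28b96"},
    "sdc": {"osd_lvm_uuid": "10c8172d-d6a1-5b27-956e-8c5bc818fcb1"},
}
volumes = lvm_volumes_from_osd_devices(devices)
```

The `changed` results for "Create block VGs" and "Create block LVs" then correspond to creating one VG per physical device (`/dev/sdb`, `/dev/sdc`) and one block LV inside each.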
2025-05-29 00:44:20.796343 | orchestrator |
2025-05-29 00:44:20.796782 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] ****************
2025-05-29 00:44:20.797266 | orchestrator | Thursday 29 May 2025 00:44:20 +0000 (0:00:00.151) 0:00:40.062 **********
2025-05-29 00:44:20.984866 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:20.985659 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:20.986135 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:20.986722 | orchestrator |
2025-05-29 00:44:20.987601 | orchestrator | TASK [Count OSDs put on ceph_wal_devices defined in lvm_volumes] ***************
2025-05-29 00:44:20.987773 | orchestrator | Thursday 29 May 2025 00:44:20 +0000 (0:00:00.192) 0:00:40.255 **********
2025-05-29 00:44:21.157919 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:21.158816 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:21.159475 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:21.160278 | orchestrator |
2025-05-29 00:44:21.160807 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************
2025-05-29 00:44:21.161629 | orchestrator | Thursday 29 May 2025 00:44:21 +0000 (0:00:00.173) 0:00:40.429 **********
2025-05-29 00:44:21.352534 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:21.352635 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:21.353431 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:21.355808 | orchestrator |
2025-05-29 00:44:21.355854 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] *********************
2025-05-29 00:44:21.355867 | orchestrator | Thursday 29 May 2025 00:44:21 +0000 (0:00:00.193) 0:00:40.622 **********
2025-05-29 00:44:21.518386 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:21.521073 | orchestrator |
2025-05-29 00:44:21.521776 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ********************
2025-05-29 00:44:21.522862 | orchestrator | Thursday 29 May 2025 00:44:21 +0000 (0:00:00.167) 0:00:40.789 **********
2025-05-29 00:44:21.692842 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:21.692942 | orchestrator |
2025-05-29 00:44:21.693029 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] *****************
2025-05-29 00:44:21.693544 | orchestrator | Thursday 29 May 2025 00:44:21 +0000 (0:00:00.175) 0:00:40.965 **********
2025-05-29 00:44:21.829316 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:21.829396 | orchestrator |
2025-05-29 00:44:21.830170 | orchestrator | TASK [Print number of OSDs wanted per DB VG] ***********************************
2025-05-29 00:44:21.830498 | orchestrator | Thursday 29 May 2025 00:44:21 +0000 (0:00:00.130) 0:00:41.095 **********
2025-05-29 00:44:22.297180 | orchestrator | ok: [testbed-node-4] => {
2025-05-29 00:44:22.297311 | orchestrator |  "_num_osds_wanted_per_db_vg": {}
2025-05-29 00:44:22.298469 | orchestrator | }
2025-05-29 00:44:22.299477 | orchestrator |
2025-05-29 00:44:22.300963 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] **********************************
2025-05-29 00:44:22.302495 | orchestrator | Thursday 29 May 2025 00:44:22 +0000 (0:00:00.470) 0:00:41.565 **********
2025-05-29 00:44:22.467875 | orchestrator | ok: [testbed-node-4] => {
2025-05-29 00:44:22.468680 | orchestrator |  "_num_osds_wanted_per_wal_vg": {}
2025-05-29 00:44:22.469769 | orchestrator | }
2025-05-29 00:44:22.470804 | orchestrator |
2025-05-29 00:44:22.471729 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] *******************************
2025-05-29 00:44:22.473002 | orchestrator | Thursday 29 May 2025 00:44:22 +0000 (0:00:00.173) 0:00:41.738 **********
2025-05-29 00:44:22.610431 | orchestrator | ok: [testbed-node-4] => {
2025-05-29 00:44:22.610557 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {}
2025-05-29 00:44:22.611421 | orchestrator | }
2025-05-29 00:44:22.613645 | orchestrator |
2025-05-29 00:44:22.613673 | orchestrator | TASK [Gather DB VGs with total and available size in bytes] ********************
2025-05-29 00:44:22.613686 | orchestrator | Thursday 29 May 2025 00:44:22 +0000 (0:00:00.141) 0:00:41.880 **********
2025-05-29 00:44:23.163709 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:44:23.163870 | orchestrator |
2025-05-29 00:44:23.164398 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] *******************
2025-05-29 00:44:23.165314 | orchestrator | Thursday 29 May 2025 00:44:23 +0000 (0:00:00.553) 0:00:42.434 **********
2025-05-29 00:44:23.672116 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:44:23.672267 | orchestrator |
2025-05-29 00:44:23.672285 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] ****************
2025-05-29 00:44:23.672366 | orchestrator | Thursday 29 May 2025 00:44:23 +0000 (0:00:00.510) 0:00:42.944 **********
2025-05-29 00:44:24.213035 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:44:24.213254 | orchestrator |
2025-05-29 00:44:24.215126 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] *************************
2025-05-29 00:44:24.215205 | orchestrator | Thursday 29 May 2025 00:44:24 +0000 (0:00:00.172) 0:00:43.482 **********
2025-05-29 00:44:24.385120 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:44:24.385330 | orchestrator |
2025-05-29 00:44:24.386087 | orchestrator | TASK [Calculate VG sizes (without buffer)] *************************************
2025-05-29 00:44:24.388221 | orchestrator | Thursday 29 May 2025 00:44:24 +0000 (0:00:00.120) 0:00:43.655 **********
2025-05-29 00:44:24.505247 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:24.505386 | orchestrator |
2025-05-29 00:44:24.505716 | orchestrator | TASK [Calculate VG sizes (with buffer)] ****************************************
2025-05-29 00:44:24.506655 | orchestrator | Thursday 29 May 2025 00:44:24 +0000 (0:00:00.113) 0:00:43.775 **********
2025-05-29 00:44:24.618871 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:24.621243 | orchestrator |
2025-05-29 00:44:24.621275 | orchestrator | TASK [Print LVM VGs report data] ***********************************************
2025-05-29 00:44:24.622340 | orchestrator | Thursday 29 May 2025 00:44:24 +0000 (0:00:00.113) 0:00:43.889 **********
2025-05-29 00:44:24.761763 | orchestrator | ok: [testbed-node-4] => {
2025-05-29 00:44:24.763653 | orchestrator |  "vgs_report": {
2025-05-29 00:44:24.764621 | orchestrator |  "vg": []
2025-05-29 00:44:24.765763 | orchestrator |  }
2025-05-29 00:44:24.767040 | orchestrator | }
2025-05-29 00:44:24.767638 | orchestrator |
2025-05-29 00:44:24.768306 | orchestrator | TASK [Print LVM VG sizes] ******************************************************
2025-05-29 00:44:24.768970 | orchestrator | Thursday 29 May 2025 00:44:24 +0000 (0:00:00.143) 0:00:44.032 **********
2025-05-29 00:44:24.899339 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:24.899771 | orchestrator |
2025-05-29 00:44:24.900125 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************
2025-05-29 00:44:24.900793 | orchestrator | Thursday 29 May 2025 00:44:24 +0000 (0:00:00.137) 0:00:44.170 **********
2025-05-29 00:44:25.216797 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:25.216954 | orchestrator |
2025-05-29 00:44:25.217834 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] ****************************
2025-05-29 00:44:25.218595 | orchestrator | Thursday 29 May 2025 00:44:25 +0000 (0:00:00.316) 0:00:44.487 **********
2025-05-29 00:44:25.362336 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:25.362441 | orchestrator |
2025-05-29 00:44:25.363122 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] *******************
2025-05-29 00:44:25.363866 | orchestrator | Thursday 29 May 2025 00:44:25 +0000 (0:00:00.145) 0:00:44.633 **********
2025-05-29 00:44:25.508399 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:25.508495 | orchestrator |
2025-05-29 00:44:25.509396 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] ***********************
2025-05-29 00:44:25.509656 | orchestrator | Thursday 29 May 2025 00:44:25 +0000 (0:00:00.142) 0:00:44.780 **********
2025-05-29 00:44:25.651193 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:25.652346 | orchestrator |
2025-05-29 00:44:25.653193 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] ***************************
2025-05-29 00:44:25.654180 | orchestrator | Thursday 29 May 2025 00:44:25 +0000 (0:00:00.136) 0:00:44.922 **********
2025-05-29 00:44:25.788104 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:25.788580 | orchestrator |
2025-05-29 00:44:25.789068 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] *****************
2025-05-29 00:44:25.789964 | orchestrator | Thursday 29 May 2025 00:44:25 +0000 (0:00:00.136) 0:00:45.059 **********
2025-05-29 00:44:25.926229 | orchestrator | skipping: [testbed-node-4]
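The "Calculate size needed" / "Fail if size … > available" tasks above are all skipped on this node because no ceph_db_devices or ceph_wal_devices are configured, but the capacity check they perform can be sketched as follows. This is an illustrative sketch only, not the actual OSISM playbook logic; the function name and the numbers are hypothetical, and the free-space figure would in practice come from `vgs --units B --reportformat json`:

```python
GiB = 1024 ** 3  # binary gigabyte, matching LVM's GiB units


def check_vg_capacity(vg_free_bytes, requested_lv_bytes):
    """Return (fits, total_requested) for one volume group.

    vg_free_bytes: free space reported for the VG (e.g. from vgs JSON output).
    requested_lv_bytes: sizes of the DB/WAL LVs to be carved out of that VG.
    """
    total = sum(requested_lv_bytes)
    return total <= vg_free_bytes, total


# Hypothetical numbers: a 100 GiB DB VG with three 30 GiB DB LVs requested.
fits, total = check_vg_capacity(100 * GiB, [30 * GiB] * 3)
```

If `fits` were False, a task like "Fail if size of DB LVs on ceph_db_devices > available" would abort the play for that host before any LVs are created.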
2025-05-29 00:44:25.926962 | orchestrator |
2025-05-29 00:44:25.927190 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] ****************
2025-05-29 00:44:25.927949 | orchestrator | Thursday 29 May 2025 00:44:25 +0000 (0:00:00.138) 0:00:45.198 **********
2025-05-29 00:44:26.066088 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:26.067044 | orchestrator |
2025-05-29 00:44:26.067627 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ********************
2025-05-29 00:44:26.068561 | orchestrator | Thursday 29 May 2025 00:44:26 +0000 (0:00:00.138) 0:00:45.336 **********
2025-05-29 00:44:26.206083 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:26.206381 | orchestrator |
2025-05-29 00:44:26.207268 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] *****************
2025-05-29 00:44:26.208177 | orchestrator | Thursday 29 May 2025 00:44:26 +0000 (0:00:00.141) 0:00:45.477 **********
2025-05-29 00:44:26.346730 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:26.348623 | orchestrator |
2025-05-29 00:44:26.349340 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] *********************
2025-05-29 00:44:26.349575 | orchestrator | Thursday 29 May 2025 00:44:26 +0000 (0:00:00.140) 0:00:45.618 **********
2025-05-29 00:44:26.488434 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:26.490066 | orchestrator |
2025-05-29 00:44:26.490791 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] ***********
2025-05-29 00:44:26.491787 | orchestrator | Thursday 29 May 2025 00:44:26 +0000 (0:00:00.141) 0:00:45.760 **********
2025-05-29 00:44:26.625198 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:26.625511 | orchestrator |
2025-05-29 00:44:26.626354 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] *************************
2025-05-29 00:44:26.626910 | orchestrator | Thursday 29 May 2025 00:44:26 +0000 (0:00:00.135) 0:00:45.895 **********
2025-05-29 00:44:26.761213 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:26.761441 | orchestrator |
2025-05-29 00:44:26.761616 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] *********************
2025-05-29 00:44:26.762175 | orchestrator | Thursday 29 May 2025 00:44:26 +0000 (0:00:00.138) 0:00:46.033 **********
2025-05-29 00:44:27.186341 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:27.187187 | orchestrator |
2025-05-29 00:44:27.190471 | orchestrator | TASK [Create DB LVs for ceph_db_devices] ***************************************
2025-05-29 00:44:27.190513 | orchestrator | Thursday 29 May 2025 00:44:27 +0000 (0:00:00.421) 0:00:46.454 **********
2025-05-29 00:44:27.360891 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:27.361268 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:27.362302 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:27.364851 | orchestrator |
2025-05-29 00:44:27.364877 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] *******************************
2025-05-29 00:44:27.365993 | orchestrator | Thursday 29 May 2025 00:44:27 +0000 (0:00:00.177) 0:00:46.631 **********
2025-05-29 00:44:27.535802 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:27.535970 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:27.536417 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:27.537076 | orchestrator |
2025-05-29 00:44:27.537720 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] *************************************
2025-05-29 00:44:27.538763 | orchestrator | Thursday 29 May 2025 00:44:27 +0000 (0:00:00.175) 0:00:46.807 **********
2025-05-29 00:44:27.714881 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:27.716497 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:27.719093 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:27.719247 | orchestrator |
2025-05-29 00:44:27.720946 | orchestrator | TASK [Print 'Create WAL LVs for ceph_wal_devices'] *****************************
2025-05-29 00:44:27.721115 | orchestrator | Thursday 29 May 2025 00:44:27 +0000 (0:00:00.179) 0:00:46.986 **********
2025-05-29 00:44:27.892866 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:27.894916 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:27.895887 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:27.896869 | orchestrator |
2025-05-29 00:44:27.897554 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] **********************************
2025-05-29 00:44:27.898345 | orchestrator | Thursday 29 May 2025 00:44:27 +0000 (0:00:00.178) 0:00:47.164 **********
2025-05-29 00:44:28.079655 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:28.080763 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:28.081936 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:28.082818 | orchestrator |
2025-05-29 00:44:28.083327 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] **************************
2025-05-29 00:44:28.083702 | orchestrator | Thursday 29 May 2025 00:44:28 +0000 (0:00:00.184) 0:00:47.348 **********
2025-05-29 00:44:28.268877 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:28.269381 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:28.269841 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:28.270602 | orchestrator |
2025-05-29 00:44:28.273513 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] ***********************************
2025-05-29 00:44:28.274243 | orchestrator | Thursday 29 May 2025 00:44:28 +0000 (0:00:00.190) 0:00:47.539 **********
2025-05-29 00:44:28.440388 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:28.440607 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:28.441322 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:28.442012 | orchestrator |
2025-05-29 00:44:28.442773 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] ***************************
2025-05-29 00:44:28.442800 | orchestrator | Thursday 29 May 2025 00:44:28 +0000 (0:00:00.172) 0:00:47.711 **********
2025-05-29 00:44:28.593793 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:28.593966 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:28.595337 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:28.597484 | orchestrator |
2025-05-29 00:44:28.597512 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ********************************
2025-05-29 00:44:28.597587 | orchestrator | Thursday 29 May 2025 00:44:28 +0000 (0:00:00.152) 0:00:47.864 **********
2025-05-29 00:44:29.119308 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:44:29.120231 | orchestrator |
2025-05-29 00:44:29.121397 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ********************************
2025-05-29 00:44:29.122065 | orchestrator | Thursday 29 May 2025 00:44:29 +0000 (0:00:00.524) 0:00:48.389 **********
2025-05-29 00:44:29.671697 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:44:29.671842 | orchestrator |
2025-05-29 00:44:29.672731 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] ***********************
2025-05-29 00:44:29.673105 | orchestrator | Thursday 29 May 2025 00:44:29 +0000 (0:00:00.554) 0:00:48.943 **********
2025-05-29 00:44:29.992570 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:44:29.995779 | orchestrator |
2025-05-29 00:44:29.995810 | orchestrator | TASK [Create list of VG/LV names] **********************************************
2025-05-29 00:44:29.995823 | orchestrator | Thursday 29 May 2025 00:44:29 +0000 (0:00:00.318) 0:00:49.262 **********
2025-05-29 00:44:30.182283 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'vg_name': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:30.183014 | orchestrator | ok: [testbed-node-4] => (item={'lv_name': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'vg_name': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:30.183959 | orchestrator |
2025-05-29 00:44:30.184266 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] **********************
2025-05-29 00:44:30.185203 | orchestrator | Thursday 29 May 2025 00:44:30 +0000 (0:00:00.184) 0:00:49.447 **********
2025-05-29 00:44:30.357466 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:30.357849 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:30.358748 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:30.361922 | orchestrator |
2025-05-29 00:44:30.361945 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] *************************
2025-05-29 00:44:30.361959 | orchestrator | Thursday 29 May 2025 00:44:30 +0000 (0:00:00.181) 0:00:49.628 **********
2025-05-29 00:44:30.524559 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:30.525295 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:30.528066 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:30.529059 | orchestrator |
2025-05-29 00:44:30.529565 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************
2025-05-29 00:44:30.529599 | orchestrator | Thursday 29 May 2025 00:44:30 +0000 (0:00:00.166) 0:00:49.795 **********
2025-05-29 00:44:30.693986 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:44:30.695383 | orchestrator | skipping: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:44:30.696061 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:44:30.697226 | orchestrator |
2025-05-29 00:44:30.698663 | orchestrator | TASK [Print LVM report data] ***************************************************
2025-05-29 00:44:30.698986 | orchestrator | Thursday 29 May 2025 00:44:30 +0000 (0:00:00.170) 0:00:49.965 **********
2025-05-29 00:44:31.545475 | orchestrator | ok: [testbed-node-4] => {
2025-05-29 00:44:31.548820 | orchestrator |  "lvm_report": {
2025-05-29 00:44:31.548898 | orchestrator |  "lv": [
2025-05-29 00:44:31.549915 | orchestrator |  {
2025-05-29 00:44:31.550978 | orchestrator |  "lv_name": "osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1",
2025-05-29 00:44:31.551620 | orchestrator |  "vg_name": "ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1"
2025-05-29 00:44:31.552641 | orchestrator |  },
2025-05-29 00:44:31.553498 | orchestrator |  {
2025-05-29 00:44:31.553713 | orchestrator |  "lv_name": "osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96",
2025-05-29 00:44:31.554751 | orchestrator |  "vg_name": "ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96"
2025-05-29 00:44:31.555221 | orchestrator |  }
2025-05-29 00:44:31.555778 | orchestrator |  ],
2025-05-29 00:44:31.556420 | orchestrator |  "pv": [
2025-05-29 00:44:31.556997 | orchestrator |  {
2025-05-29 00:44:31.557533 | orchestrator |  "pv_name": "/dev/sdb",
2025-05-29 00:44:31.557974 | orchestrator |  "vg_name": "ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96"
2025-05-29 00:44:31.558684 | orchestrator |  },
2025-05-29 00:44:31.558997 | orchestrator |  {
2025-05-29 00:44:31.559402 | orchestrator |  "pv_name": "/dev/sdc",
2025-05-29 00:44:31.559759 | orchestrator |  "vg_name": "ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1"
2025-05-29 00:44:31.560024 | orchestrator |  }
2025-05-29 00:44:31.560513 | orchestrator |  ]
2025-05-29 00:44:31.560690 | orchestrator |  }
2025-05-29 00:44:31.561006 | orchestrator | }
2025-05-29 00:44:31.561384 | orchestrator |
2025-05-29 00:44:31.561727 | orchestrator | PLAY [Ceph create LVM devices] *************************************************
2025-05-29 00:44:31.562062 | orchestrator |
2025-05-29 00:44:31.562538 | orchestrator | TASK [Get extra vars for Ceph configuration] ***********************************
2025-05-29 00:44:31.562869 | orchestrator | Thursday 29 May 2025 00:44:31 +0000 (0:00:00.850) 0:00:50.815 **********
2025-05-29 00:44:31.797207 | orchestrator | ok: [testbed-node-5 -> testbed-manager(192.168.16.5)]
2025-05-29 00:44:31.797313 | orchestrator |
2025-05-29 00:44:31.797340 | orchestrator | TASK [Get initial list of available block devices] *****************************
2025-05-29 00:44:31.797443 | orchestrator | Thursday 29 May 2025 00:44:31 +0000 (0:00:00.250) 0:00:51.066 **********
2025-05-29 00:44:32.083984 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:44:32.084774 | orchestrator |
2025-05-29 00:44:32.084824 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:32.085788 | orchestrator | Thursday 29 May 2025 00:44:32 +0000 (0:00:00.288) 0:00:51.355 **********
2025-05-29 00:44:32.569266 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop0)
2025-05-29 00:44:32.569629 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop1)
2025-05-29 00:44:32.571578 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop2)
2025-05-29 00:44:32.571859 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop3)
2025-05-29 00:44:32.572706 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop4)
2025-05-29 00:44:32.573738 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop5)
2025-05-29 00:44:32.575260 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop6)
2025-05-29 00:44:32.576470 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=loop7)
2025-05-29 00:44:32.576866 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sda)
2025-05-29 00:44:32.577763 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdb)
2025-05-29 00:44:32.578179 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdc)
2025-05-29 00:44:32.578869 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sdd)
2025-05-29 00:44:32.579516 | orchestrator | included: /ansible/tasks/_add-device-links.yml for testbed-node-5 => (item=sr0)
2025-05-29 00:44:32.579834 | orchestrator |
2025-05-29 00:44:32.580384 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:32.580670 | orchestrator | Thursday 29 May 2025 00:44:32 +0000 (0:00:00.484) 0:00:51.839 **********
2025-05-29 00:44:32.778360 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:32.778455 | orchestrator |
2025-05-29 00:44:32.779377 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:32.779950 | orchestrator | Thursday 29 May 2025 00:44:32 +0000 (0:00:00.209) 0:00:52.049 **********
2025-05-29 00:44:32.985320 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:32.985433 | orchestrator |
2025-05-29 00:44:32.985537 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:32.986149 | orchestrator | Thursday 29 May 2025 00:44:32 +0000 (0:00:00.206) 0:00:52.255 **********
2025-05-29 00:44:33.183974 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:33.184059 | orchestrator |
2025-05-29 00:44:33.184074 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:33.184087 | orchestrator | Thursday 29 May 2025 00:44:33 +0000 (0:00:00.195) 0:00:52.450 **********
2025-05-29 00:44:33.403355 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:33.403682 | orchestrator |
2025-05-29 00:44:33.404376 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:33.405082 | orchestrator | Thursday 29 May 2025 00:44:33 +0000 (0:00:00.224) 0:00:52.675 **********
2025-05-29 00:44:34.067612 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:34.068303 | orchestrator |
2025-05-29 00:44:34.068660 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:34.069299 | orchestrator | Thursday 29 May 2025 00:44:34 +0000 (0:00:00.662) 0:00:53.338 **********
2025-05-29 00:44:34.279270 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:34.279407 | orchestrator |
2025-05-29 00:44:34.279482 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:34.282303 | orchestrator | Thursday 29 May 2025 00:44:34 +0000 (0:00:00.209) 0:00:53.548 **********
2025-05-29 00:44:34.471348 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:34.473376 | orchestrator |
2025-05-29 00:44:34.473421 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:34.474680 | orchestrator | Thursday 29 May 2025 00:44:34 +0000 (0:00:00.194) 0:00:53.742 **********
2025-05-29 00:44:34.678651 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:34.679154 | orchestrator |
2025-05-29 00:44:34.679847 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:34.680420 | orchestrator | Thursday 29 May 2025 00:44:34 +0000 (0:00:00.207) 0:00:53.950 **********
2025-05-29 00:44:35.099971 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428)
2025-05-29 00:44:35.100074 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428)
2025-05-29 00:44:35.100295 | orchestrator |
2025-05-29 00:44:35.100962 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:35.101753 | orchestrator | Thursday 29 May 2025 00:44:35 +0000 (0:00:00.421) 0:00:54.371 **********
2025-05-29 00:44:35.515998 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_baffed07-1ba6-4c69-bef3-fae49f76e29e)
2025-05-29 00:44:35.516924 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_baffed07-1ba6-4c69-bef3-fae49f76e29e)
2025-05-29 00:44:35.517755 | orchestrator |
2025-05-29 00:44:35.519423 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:35.519705 | orchestrator | Thursday 29 May 2025 00:44:35 +0000 (0:00:00.414) 0:00:54.786 **********
2025-05-29 00:44:35.971591 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_6be5e360-5fe4-4176-98be-0e33dc067da2)
2025-05-29 00:44:35.971758 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_6be5e360-5fe4-4176-98be-0e33dc067da2)
2025-05-29 00:44:35.972669 | orchestrator |
2025-05-29 00:44:35.973468 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:35.974493 | orchestrator | Thursday 29 May 2025 00:44:35 +0000 (0:00:00.456) 0:00:55.242 **********
2025-05-29 00:44:36.453511 | orchestrator | ok: [testbed-node-5] => (item=scsi-0QEMU_QEMU_HARDDISK_c045ec7e-dfd2-45aa-a5da-e7ebbe64f976)
2025-05-29 00:44:36.455069 | orchestrator | ok: [testbed-node-5] => (item=scsi-SQEMU_QEMU_HARDDISK_c045ec7e-dfd2-45aa-a5da-e7ebbe64f976)
2025-05-29 00:44:36.456285 | orchestrator |
2025-05-29 00:44:36.459578 | orchestrator | TASK [Add known links to the list of available block devices] ******************
2025-05-29 00:44:36.459802 | orchestrator | Thursday 29 May 2025 00:44:36 +0000 (0:00:00.480) 0:00:55.723 **********
2025-05-29 00:44:36.796037 | orchestrator | ok: [testbed-node-5] => (item=ata-QEMU_DVD-ROM_QM00001)
2025-05-29 00:44:36.796700 | orchestrator |
2025-05-29 00:44:36.797971 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:36.798899 | orchestrator | Thursday 29 May 2025 00:44:36 +0000 (0:00:00.343) 0:00:56.067 **********
2025-05-29 00:44:37.478218 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop0)
2025-05-29 00:44:37.478330 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop1)
2025-05-29 00:44:37.478670 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop2)
2025-05-29 00:44:37.479966 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop3)
2025-05-29 00:44:37.479989 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop4)
2025-05-29 00:44:37.480671 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop5)
2025-05-29 00:44:37.481317 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop6)
2025-05-29 00:44:37.482457 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=loop7)
2025-05-29 00:44:37.482576 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sda)
2025-05-29 00:44:37.482829 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdb)
2025-05-29 00:44:37.483312 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdc)
2025-05-29 00:44:37.483798 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sdd)
2025-05-29 00:44:37.484537 | orchestrator | included: /ansible/tasks/_add-device-partitions.yml for testbed-node-5 => (item=sr0)
2025-05-29 00:44:37.484829 | orchestrator |
2025-05-29 00:44:37.485227 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:37.485567 | orchestrator | Thursday 29 May 2025 00:44:37 +0000 (0:00:00.681) 0:00:56.748 **********
2025-05-29 00:44:37.690197 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:37.690932 | orchestrator |
2025-05-29 00:44:37.691935 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:37.692589 | orchestrator | Thursday 29 May 2025 00:44:37 +0000 (0:00:00.213) 0:00:56.962 **********
2025-05-29 00:44:37.903606 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:37.903978 | orchestrator |
2025-05-29 00:44:37.904932 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:37.905457 | orchestrator | Thursday 29 May 2025 00:44:37 +0000 (0:00:00.201) 0:00:57.176 **********
2025-05-29 00:44:38.107445 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:38.107774 | orchestrator |
2025-05-29 00:44:38.108356 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:38.108867 | orchestrator | Thursday 29 May 2025 00:44:38 +0000 (0:00:00.213) 0:00:57.377 **********
2025-05-29 00:44:38.320216 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:38.320900 | orchestrator |
2025-05-29 00:44:38.321642 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:38.324292 | orchestrator | Thursday 29 May 2025 00:44:38 +0000 (0:00:00.213) 0:00:57.590 **********
2025-05-29 00:44:38.539097 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:38.539762 | orchestrator |
2025-05-29 00:44:38.540568 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:38.541323 | orchestrator | Thursday 29 May 2025 00:44:38 +0000 (0:00:00.220) 0:00:57.811 **********
2025-05-29 00:44:38.754433 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:38.755128 | orchestrator |
2025-05-29 00:44:38.755852 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:38.756711 | orchestrator | Thursday 29 May 2025 00:44:38 +0000 (0:00:00.213) 0:00:58.024 **********
2025-05-29 00:44:38.996343 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:38.996607 | orchestrator |
2025-05-29 00:44:38.997779 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:38.998320 | orchestrator | Thursday 29 May 2025 00:44:38 +0000 (0:00:00.243) 0:00:58.267 **********
2025-05-29 00:44:39.205978 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:39.206626 | orchestrator |
2025-05-29 00:44:39.207844 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:39.209582 | orchestrator | Thursday 29 May 2025 00:44:39 +0000 (0:00:00.210) 0:00:58.477 **********
2025-05-29 00:44:40.089363 | orchestrator | ok: [testbed-node-5] => (item=sda1)
2025-05-29 00:44:40.089980 | orchestrator | ok: [testbed-node-5] => (item=sda14)
2025-05-29 00:44:40.091393 | orchestrator | ok: [testbed-node-5] => (item=sda15)
2025-05-29 00:44:40.092685 | orchestrator | ok: [testbed-node-5] => (item=sda16)
2025-05-29 00:44:40.093657 | orchestrator |
2025-05-29 00:44:40.095973 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:40.096381 | orchestrator | Thursday 29 May 2025 00:44:40 +0000 (0:00:00.880) 0:00:59.358 **********
2025-05-29 00:44:40.321241 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:40.321357 | orchestrator |
2025-05-29 00:44:40.321374 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:40.321387 | orchestrator | Thursday 29 May 2025 00:44:40 +0000 (0:00:00.230) 0:00:59.588 **********
2025-05-29 00:44:40.825157 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:40.825320 | orchestrator |
2025-05-29 00:44:40.826215 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:40.826348 | orchestrator | Thursday 29 May 2025 00:44:40 +0000 (0:00:00.507) 0:01:00.095 **********
2025-05-29 00:44:41.034927 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:41.035466 | orchestrator |
2025-05-29 00:44:41.036752 | orchestrator | TASK [Add known partitions to the list of available block devices] *************
2025-05-29 00:44:41.037338 | orchestrator | Thursday 29 May 2025 00:44:41 +0000 (0:00:00.210) 0:01:00.306 **********
2025-05-29 00:44:41.249862 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:41.250132 | orchestrator |
2025-05-29 00:44:41.250924 | orchestrator | TASK [Check whether ceph_db_wal_devices is used exclusively] *******************
2025-05-29 00:44:41.251913 | orchestrator | Thursday 29 May 2025 00:44:41 +0000 (0:00:00.203) 0:01:00.510 **********
2025-05-29 00:44:41.378392 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:44:41.378490 | orchestrator |
2025-05-29 00:44:41.379728 | orchestrator | TASK [Create dict of block VGs -> PVs from ceph_osd_devices] *******************
2025-05-29 00:44:41.381194 | orchestrator | Thursday 29 May 2025 00:44:41 +0000 (0:00:00.138) 0:01:00.649 **********
2025-05-29 00:44:41.589510 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdb', 'value': {'osd_lvm_uuid': 'a1850b6b-a1b4-57b7-9f5e-deb9029890df'}})
2025-05-29 00:44:41.590740 | orchestrator | ok: [testbed-node-5] => (item={'key': 'sdc', 'value': {'osd_lvm_uuid': '05ae814f-03ae-5777-aef4-91f0b0270e90'}})
2025-05-29 00:44:41.591989 | orchestrator |
2025-05-29 00:44:41.592737 | orchestrator | TASK [Create block VGs] ********************************************************
2025-05-29 00:44:41.594331 | orchestrator | Thursday 29 May 2025 00:44:41 +0000 (0:00:01.862) 0:01:00.861 **********
2025-05-29 00:44:43.454275 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})
2025-05-29 00:44:43.454401 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})
2025-05-29 00:44:43.455266 | orchestrator |
2025-05-29 00:44:43.457608 | orchestrator | TASK [Print 'Create block VGs'] ************************************************
2025-05-29 00:44:43.458980 | orchestrator | Thursday 29 May 2025 00:44:43 +0000 (0:00:01.862) 0:01:02.724 **********
2025-05-29 00:44:43.624063 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})
2025-05-29 00:44:43.624945 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})
2025-05-29 00:44:43.625289 | orchestrator | skipping:
[testbed-node-5] 2025-05-29 00:44:43.625982 | orchestrator | 2025-05-29 00:44:43.626903 | orchestrator | TASK [Create block LVs] ******************************************************** 2025-05-29 00:44:43.627174 | orchestrator | Thursday 29 May 2025 00:44:43 +0000 (0:00:00.171) 0:01:02.895 ********** 2025-05-29 00:44:45.012626 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'}) 2025-05-29 00:44:45.012719 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'}) 2025-05-29 00:44:45.013595 | orchestrator | 2025-05-29 00:44:45.014793 | orchestrator | TASK [Print 'Create block LVs'] ************************************************ 2025-05-29 00:44:45.015287 | orchestrator | Thursday 29 May 2025 00:44:45 +0000 (0:00:01.386) 0:01:04.281 ********** 2025-05-29 00:44:45.346814 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:45.346921 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:45.347519 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:45.348865 | orchestrator | 2025-05-29 00:44:45.349769 | orchestrator | TASK [Create DB VGs] *********************************************************** 2025-05-29 00:44:45.350621 | orchestrator | Thursday 29 May 2025 00:44:45 +0000 (0:00:00.335) 0:01:04.616 ********** 2025-05-29 00:44:45.490873 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:45.492339 | orchestrator | 2025-05-29 00:44:45.495455 | orchestrator | TASK [Print 'Create DB VGs'] *************************************************** 2025-05-29 00:44:45.495490 | 
orchestrator | Thursday 29 May 2025 00:44:45 +0000 (0:00:00.144) 0:01:04.761 ********** 2025-05-29 00:44:45.680359 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:45.680669 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:45.681993 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:45.682634 | orchestrator | 2025-05-29 00:44:45.684136 | orchestrator | TASK [Create WAL VGs] ********************************************************** 2025-05-29 00:44:45.685173 | orchestrator | Thursday 29 May 2025 00:44:45 +0000 (0:00:00.187) 0:01:04.949 ********** 2025-05-29 00:44:45.829070 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:45.829699 | orchestrator | 2025-05-29 00:44:45.830567 | orchestrator | TASK [Print 'Create WAL VGs'] ************************************************** 2025-05-29 00:44:45.831929 | orchestrator | Thursday 29 May 2025 00:44:45 +0000 (0:00:00.151) 0:01:05.100 ********** 2025-05-29 00:44:46.010492 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:46.010689 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:46.012269 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:46.014012 | orchestrator | 2025-05-29 00:44:46.014773 | orchestrator | TASK [Create DB+WAL VGs] ******************************************************* 2025-05-29 00:44:46.017774 | orchestrator | Thursday 29 May 2025 00:44:46 +0000 (0:00:00.180) 0:01:05.281 ********** 2025-05-29 00:44:46.167794 | orchestrator | 
skipping: [testbed-node-5] 2025-05-29 00:44:46.167893 | orchestrator | 2025-05-29 00:44:46.168479 | orchestrator | TASK [Print 'Create DB+WAL VGs'] *********************************************** 2025-05-29 00:44:46.168873 | orchestrator | Thursday 29 May 2025 00:44:46 +0000 (0:00:00.158) 0:01:05.439 ********** 2025-05-29 00:44:46.345953 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:46.347067 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:46.348448 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:46.349408 | orchestrator | 2025-05-29 00:44:46.350554 | orchestrator | TASK [Prepare variables for OSD count check] *********************************** 2025-05-29 00:44:46.351185 | orchestrator | Thursday 29 May 2025 00:44:46 +0000 (0:00:00.177) 0:01:05.617 ********** 2025-05-29 00:44:46.490064 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:44:46.491371 | orchestrator | 2025-05-29 00:44:46.492384 | orchestrator | TASK [Count OSDs put on ceph_db_devices defined in lvm_volumes] **************** 2025-05-29 00:44:46.494120 | orchestrator | Thursday 29 May 2025 00:44:46 +0000 (0:00:00.144) 0:01:05.761 ********** 2025-05-29 00:44:46.663029 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:46.663187 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:46.663308 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:46.664507 | orchestrator | 2025-05-29 00:44:46.665268 | orchestrator | TASK [Count OSDs put on 
ceph_wal_devices defined in lvm_volumes] *************** 2025-05-29 00:44:46.665882 | orchestrator | Thursday 29 May 2025 00:44:46 +0000 (0:00:00.173) 0:01:05.935 ********** 2025-05-29 00:44:46.822597 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:46.823332 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:46.825305 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:46.826727 | orchestrator | 2025-05-29 00:44:46.827442 | orchestrator | TASK [Count OSDs put on ceph_db_wal_devices defined in lvm_volumes] ************ 2025-05-29 00:44:46.828054 | orchestrator | Thursday 29 May 2025 00:44:46 +0000 (0:00:00.159) 0:01:06.094 ********** 2025-05-29 00:44:47.008971 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:47.009751 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:47.012950 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:47.013144 | orchestrator | 2025-05-29 00:44:47.013167 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB VG] ********************* 2025-05-29 00:44:47.014301 | orchestrator | Thursday 29 May 2025 00:44:47 +0000 (0:00:00.185) 0:01:06.279 ********** 2025-05-29 00:44:47.148647 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:47.149387 | orchestrator | 2025-05-29 00:44:47.150635 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a WAL VG] ******************** 2025-05-29 00:44:47.151347 | orchestrator | Thursday 29 May 2025 00:44:47 +0000 
(0:00:00.140) 0:01:06.420 ********** 2025-05-29 00:44:47.491682 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:47.493619 | orchestrator | 2025-05-29 00:44:47.494079 | orchestrator | TASK [Fail if number of OSDs exceeds num_osds for a DB+WAL VG] ***************** 2025-05-29 00:44:47.495056 | orchestrator | Thursday 29 May 2025 00:44:47 +0000 (0:00:00.342) 0:01:06.763 ********** 2025-05-29 00:44:47.641819 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:47.642830 | orchestrator | 2025-05-29 00:44:47.643883 | orchestrator | TASK [Print number of OSDs wanted per DB VG] *********************************** 2025-05-29 00:44:47.644406 | orchestrator | Thursday 29 May 2025 00:44:47 +0000 (0:00:00.148) 0:01:06.911 ********** 2025-05-29 00:44:47.794491 | orchestrator | ok: [testbed-node-5] => { 2025-05-29 00:44:47.795495 | orchestrator |  "_num_osds_wanted_per_db_vg": {} 2025-05-29 00:44:47.796522 | orchestrator | } 2025-05-29 00:44:47.799202 | orchestrator | 2025-05-29 00:44:47.799509 | orchestrator | TASK [Print number of OSDs wanted per WAL VG] ********************************** 2025-05-29 00:44:47.799542 | orchestrator | Thursday 29 May 2025 00:44:47 +0000 (0:00:00.154) 0:01:07.066 ********** 2025-05-29 00:44:47.938455 | orchestrator | ok: [testbed-node-5] => { 2025-05-29 00:44:47.939339 | orchestrator |  "_num_osds_wanted_per_wal_vg": {} 2025-05-29 00:44:47.940286 | orchestrator | } 2025-05-29 00:44:47.941087 | orchestrator | 2025-05-29 00:44:47.943740 | orchestrator | TASK [Print number of OSDs wanted per DB+WAL VG] ******************************* 2025-05-29 00:44:47.943789 | orchestrator | Thursday 29 May 2025 00:44:47 +0000 (0:00:00.143) 0:01:07.209 ********** 2025-05-29 00:44:48.081957 | orchestrator | ok: [testbed-node-5] => { 2025-05-29 00:44:48.082589 | orchestrator |  "_num_osds_wanted_per_db_wal_vg": {} 2025-05-29 00:44:48.083527 | orchestrator | } 2025-05-29 00:44:48.084643 | orchestrator | 2025-05-29 00:44:48.085782 | orchestrator | 
TASK [Gather DB VGs with total and available size in bytes] ******************** 2025-05-29 00:44:48.086817 | orchestrator | Thursday 29 May 2025 00:44:48 +0000 (0:00:00.144) 0:01:07.353 ********** 2025-05-29 00:44:48.632614 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:44:48.633347 | orchestrator | 2025-05-29 00:44:48.635649 | orchestrator | TASK [Gather WAL VGs with total and available size in bytes] ******************* 2025-05-29 00:44:48.636493 | orchestrator | Thursday 29 May 2025 00:44:48 +0000 (0:00:00.548) 0:01:07.902 ********** 2025-05-29 00:44:49.177445 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:44:49.179964 | orchestrator | 2025-05-29 00:44:49.179999 | orchestrator | TASK [Gather DB+WAL VGs with total and available size in bytes] **************** 2025-05-29 00:44:49.182144 | orchestrator | Thursday 29 May 2025 00:44:49 +0000 (0:00:00.543) 0:01:08.445 ********** 2025-05-29 00:44:49.730921 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:44:49.731532 | orchestrator | 2025-05-29 00:44:49.732336 | orchestrator | TASK [Combine JSON from _db/wal/db_wal_vgs_cmd_output] ************************* 2025-05-29 00:44:49.733313 | orchestrator | Thursday 29 May 2025 00:44:49 +0000 (0:00:00.555) 0:01:09.001 ********** 2025-05-29 00:44:49.891296 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:44:49.891709 | orchestrator | 2025-05-29 00:44:49.892867 | orchestrator | TASK [Calculate VG sizes (without buffer)] ************************************* 2025-05-29 00:44:49.893673 | orchestrator | Thursday 29 May 2025 00:44:49 +0000 (0:00:00.161) 0:01:09.163 ********** 2025-05-29 00:44:50.008412 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:50.009242 | orchestrator | 2025-05-29 00:44:50.010481 | orchestrator | TASK [Calculate VG sizes (with buffer)] **************************************** 2025-05-29 00:44:50.013549 | orchestrator | Thursday 29 May 2025 00:44:50 +0000 (0:00:00.116) 0:01:09.280 ********** 2025-05-29 00:44:50.323685 | 
orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:50.323874 | orchestrator | 2025-05-29 00:44:50.324608 | orchestrator | TASK [Print LVM VGs report data] *********************************************** 2025-05-29 00:44:50.325165 | orchestrator | Thursday 29 May 2025 00:44:50 +0000 (0:00:00.316) 0:01:09.596 ********** 2025-05-29 00:44:50.488941 | orchestrator | ok: [testbed-node-5] => { 2025-05-29 00:44:50.489344 | orchestrator |  "vgs_report": { 2025-05-29 00:44:50.490417 | orchestrator |  "vg": [] 2025-05-29 00:44:50.491646 | orchestrator |  } 2025-05-29 00:44:50.493909 | orchestrator | } 2025-05-29 00:44:50.494657 | orchestrator | 2025-05-29 00:44:50.495493 | orchestrator | TASK [Print LVM VG sizes] ****************************************************** 2025-05-29 00:44:50.496237 | orchestrator | Thursday 29 May 2025 00:44:50 +0000 (0:00:00.162) 0:01:09.759 ********** 2025-05-29 00:44:50.629120 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:50.629828 | orchestrator | 2025-05-29 00:44:50.630778 | orchestrator | TASK [Calculate size needed for LVs on ceph_db_devices] ************************ 2025-05-29 00:44:50.631441 | orchestrator | Thursday 29 May 2025 00:44:50 +0000 (0:00:00.140) 0:01:09.900 ********** 2025-05-29 00:44:50.768148 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:50.768868 | orchestrator | 2025-05-29 00:44:50.769436 | orchestrator | TASK [Print size needed for LVs on ceph_db_devices] **************************** 2025-05-29 00:44:50.770426 | orchestrator | Thursday 29 May 2025 00:44:50 +0000 (0:00:00.140) 0:01:10.040 ********** 2025-05-29 00:44:50.919597 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:50.920825 | orchestrator | 2025-05-29 00:44:50.921457 | orchestrator | TASK [Fail if size of DB LVs on ceph_db_devices > available] ******************* 2025-05-29 00:44:50.922455 | orchestrator | Thursday 29 May 2025 00:44:50 +0000 (0:00:00.150) 0:01:10.191 ********** 2025-05-29 00:44:51.066784 | 
orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:51.066947 | orchestrator | 2025-05-29 00:44:51.067206 | orchestrator | TASK [Calculate size needed for LVs on ceph_wal_devices] *********************** 2025-05-29 00:44:51.068292 | orchestrator | Thursday 29 May 2025 00:44:51 +0000 (0:00:00.148) 0:01:10.339 ********** 2025-05-29 00:44:51.226491 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:51.226743 | orchestrator | 2025-05-29 00:44:51.227944 | orchestrator | TASK [Print size needed for LVs on ceph_wal_devices] *************************** 2025-05-29 00:44:51.228680 | orchestrator | Thursday 29 May 2025 00:44:51 +0000 (0:00:00.159) 0:01:10.498 ********** 2025-05-29 00:44:51.370677 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:51.371603 | orchestrator | 2025-05-29 00:44:51.371883 | orchestrator | TASK [Fail if size of WAL LVs on ceph_wal_devices > available] ***************** 2025-05-29 00:44:51.374002 | orchestrator | Thursday 29 May 2025 00:44:51 +0000 (0:00:00.142) 0:01:10.641 ********** 2025-05-29 00:44:51.510840 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:51.511004 | orchestrator | 2025-05-29 00:44:51.513570 | orchestrator | TASK [Calculate size needed for WAL LVs on ceph_db_wal_devices] **************** 2025-05-29 00:44:51.515309 | orchestrator | Thursday 29 May 2025 00:44:51 +0000 (0:00:00.140) 0:01:10.781 ********** 2025-05-29 00:44:51.652216 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:51.652553 | orchestrator | 2025-05-29 00:44:51.653455 | orchestrator | TASK [Print size needed for WAL LVs on ceph_db_wal_devices] ******************** 2025-05-29 00:44:51.654755 | orchestrator | Thursday 29 May 2025 00:44:51 +0000 (0:00:00.141) 0:01:10.923 ********** 2025-05-29 00:44:51.794865 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:51.795727 | orchestrator | 2025-05-29 00:44:51.796712 | orchestrator | TASK [Calculate size needed for DB LVs on ceph_db_wal_devices] ***************** 
2025-05-29 00:44:51.797667 | orchestrator | Thursday 29 May 2025 00:44:51 +0000 (0:00:00.143) 0:01:11.066 ********** 2025-05-29 00:44:51.954283 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:51.954367 | orchestrator | 2025-05-29 00:44:51.954458 | orchestrator | TASK [Print size needed for DB LVs on ceph_db_wal_devices] ********************* 2025-05-29 00:44:51.954868 | orchestrator | Thursday 29 May 2025 00:44:51 +0000 (0:00:00.158) 0:01:11.225 ********** 2025-05-29 00:44:52.279852 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:52.280565 | orchestrator | 2025-05-29 00:44:52.281646 | orchestrator | TASK [Fail if size of DB+WAL LVs on ceph_db_wal_devices > available] *********** 2025-05-29 00:44:52.282342 | orchestrator | Thursday 29 May 2025 00:44:52 +0000 (0:00:00.326) 0:01:11.552 ********** 2025-05-29 00:44:52.436260 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:52.437546 | orchestrator | 2025-05-29 00:44:52.438607 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_devices] ************************* 2025-05-29 00:44:52.439742 | orchestrator | Thursday 29 May 2025 00:44:52 +0000 (0:00:00.156) 0:01:11.708 ********** 2025-05-29 00:44:52.576502 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:52.577880 | orchestrator | 2025-05-29 00:44:52.579456 | orchestrator | TASK [Fail if DB LV size < 30 GiB for ceph_db_wal_devices] ********************* 2025-05-29 00:44:52.580268 | orchestrator | Thursday 29 May 2025 00:44:52 +0000 (0:00:00.140) 0:01:11.848 ********** 2025-05-29 00:44:52.733613 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:52.735142 | orchestrator | 2025-05-29 00:44:52.736367 | orchestrator | TASK [Create DB LVs for ceph_db_devices] *************************************** 2025-05-29 00:44:52.737394 | orchestrator | Thursday 29 May 2025 00:44:52 +0000 (0:00:00.153) 0:01:12.002 ********** 2025-05-29 00:44:52.911562 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:52.912930 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:52.913721 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:52.914981 | orchestrator | 2025-05-29 00:44:52.915787 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_devices'] ******************************* 2025-05-29 00:44:52.916523 | orchestrator | Thursday 29 May 2025 00:44:52 +0000 (0:00:00.179) 0:01:12.182 ********** 2025-05-29 00:44:53.068306 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:53.069648 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:53.070419 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:53.071485 | orchestrator | 2025-05-29 00:44:53.072566 | orchestrator | TASK [Create WAL LVs for ceph_wal_devices] ************************************* 2025-05-29 00:44:53.072957 | orchestrator | Thursday 29 May 2025 00:44:53 +0000 (0:00:00.157) 0:01:12.340 ********** 2025-05-29 00:44:53.239746 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:53.242136 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:53.243629 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:53.243670 | orchestrator | 2025-05-29 00:44:53.244027 | orchestrator | TASK [Print 'Create WAL LVs for 
ceph_wal_devices'] ***************************** 2025-05-29 00:44:53.244856 | orchestrator | Thursday 29 May 2025 00:44:53 +0000 (0:00:00.171) 0:01:12.511 ********** 2025-05-29 00:44:53.405935 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:53.407209 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:53.408033 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:53.409075 | orchestrator | 2025-05-29 00:44:53.409973 | orchestrator | TASK [Create WAL LVs for ceph_db_wal_devices] ********************************** 2025-05-29 00:44:53.413272 | orchestrator | Thursday 29 May 2025 00:44:53 +0000 (0:00:00.166) 0:01:12.677 ********** 2025-05-29 00:44:53.584674 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:53.585687 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:53.587761 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:53.591390 | orchestrator | 2025-05-29 00:44:53.591416 | orchestrator | TASK [Print 'Create WAL LVs for ceph_db_wal_devices'] ************************** 2025-05-29 00:44:53.591429 | orchestrator | Thursday 29 May 2025 00:44:53 +0000 (0:00:00.179) 0:01:12.856 ********** 2025-05-29 00:44:53.759004 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:53.760580 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:53.761417 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:53.762303 | orchestrator | 2025-05-29 00:44:53.762808 | orchestrator | TASK [Create DB LVs for ceph_db_wal_devices] *********************************** 2025-05-29 00:44:53.763438 | orchestrator | Thursday 29 May 2025 00:44:53 +0000 (0:00:00.171) 0:01:13.028 ********** 2025-05-29 00:44:53.934971 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:53.935062 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:53.936470 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:53.937645 | orchestrator | 2025-05-29 00:44:53.938809 | orchestrator | TASK [Print 'Create DB LVs for ceph_db_wal_devices'] *************************** 2025-05-29 00:44:53.939259 | orchestrator | Thursday 29 May 2025 00:44:53 +0000 (0:00:00.176) 0:01:13.205 ********** 2025-05-29 00:44:54.308933 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:54.309910 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:54.310551 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:54.312905 | orchestrator | 2025-05-29 00:44:54.315991 | orchestrator | TASK [Get list of Ceph LVs with associated VGs] ******************************** 2025-05-29 00:44:54.316025 | orchestrator | Thursday 29 May 2025 00:44:54 +0000 (0:00:00.375) 0:01:13.580 ********** 2025-05-29 00:44:54.898530 | 
orchestrator | ok: [testbed-node-5] 2025-05-29 00:44:54.898942 | orchestrator | 2025-05-29 00:44:54.900845 | orchestrator | TASK [Get list of Ceph PVs with associated VGs] ******************************** 2025-05-29 00:44:54.900885 | orchestrator | Thursday 29 May 2025 00:44:54 +0000 (0:00:00.587) 0:01:14.168 ********** 2025-05-29 00:44:55.437827 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:44:55.437916 | orchestrator | 2025-05-29 00:44:55.438255 | orchestrator | TASK [Combine JSON from _lvs_cmd_output/_pvs_cmd_output] *********************** 2025-05-29 00:44:55.439413 | orchestrator | Thursday 29 May 2025 00:44:55 +0000 (0:00:00.539) 0:01:14.708 ********** 2025-05-29 00:44:55.608560 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:44:55.609339 | orchestrator | 2025-05-29 00:44:55.610554 | orchestrator | TASK [Create list of VG/LV names] ********************************************** 2025-05-29 00:44:55.611845 | orchestrator | Thursday 29 May 2025 00:44:55 +0000 (0:00:00.171) 0:01:14.879 ********** 2025-05-29 00:44:55.797812 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'vg_name': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'}) 2025-05-29 00:44:55.799247 | orchestrator | ok: [testbed-node-5] => (item={'lv_name': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'vg_name': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'}) 2025-05-29 00:44:55.799662 | orchestrator | 2025-05-29 00:44:55.802905 | orchestrator | TASK [Fail if block LV defined in lvm_volumes is missing] ********************** 2025-05-29 00:44:55.802999 | orchestrator | Thursday 29 May 2025 00:44:55 +0000 (0:00:00.188) 0:01:15.068 ********** 2025-05-29 00:44:55.974260 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:55.975544 | orchestrator | skipping: [testbed-node-5] => (item={'data': 
'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:55.976264 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:55.977288 | orchestrator | 2025-05-29 00:44:55.981243 | orchestrator | TASK [Fail if DB LV defined in lvm_volumes is missing] ************************* 2025-05-29 00:44:55.981283 | orchestrator | Thursday 29 May 2025 00:44:55 +0000 (0:00:00.177) 0:01:15.245 ********** 2025-05-29 00:44:56.145490 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:56.146161 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:56.147862 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:56.148415 | orchestrator | 2025-05-29 00:44:56.149338 | orchestrator | TASK [Fail if WAL LV defined in lvm_volumes is missing] ************************ 2025-05-29 00:44:56.150298 | orchestrator | Thursday 29 May 2025 00:44:56 +0000 (0:00:00.171) 0:01:15.416 ********** 2025-05-29 00:44:56.325889 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})  2025-05-29 00:44:56.328030 | orchestrator | skipping: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})  2025-05-29 00:44:56.330437 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:44:56.331377 | orchestrator | 2025-05-29 00:44:56.332550 | orchestrator | TASK [Print LVM report data] *************************************************** 2025-05-29 00:44:56.333490 | orchestrator | Thursday 29 May 2025 00:44:56 +0000 (0:00:00.176) 0:01:15.593 ********** 2025-05-29 00:44:56.900687 | 
orchestrator | ok: [testbed-node-5] => { 2025-05-29 00:44:56.903129 | orchestrator |  "lvm_report": { 2025-05-29 00:44:56.904340 | orchestrator |  "lv": [ 2025-05-29 00:44:56.905314 | orchestrator |  { 2025-05-29 00:44:56.906151 | orchestrator |  "lv_name": "osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90", 2025-05-29 00:44:56.907046 | orchestrator |  "vg_name": "ceph-05ae814f-03ae-5777-aef4-91f0b0270e90" 2025-05-29 00:44:56.908112 | orchestrator |  }, 2025-05-29 00:44:56.909066 | orchestrator |  { 2025-05-29 00:44:56.909978 | orchestrator |  "lv_name": "osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df", 2025-05-29 00:44:56.910849 | orchestrator |  "vg_name": "ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df" 2025-05-29 00:44:56.911752 | orchestrator |  } 2025-05-29 00:44:56.912593 | orchestrator |  ], 2025-05-29 00:44:56.913940 | orchestrator |  "pv": [ 2025-05-29 00:44:56.914841 | orchestrator |  { 2025-05-29 00:44:56.915505 | orchestrator |  "pv_name": "/dev/sdb", 2025-05-29 00:44:56.915849 | orchestrator |  "vg_name": "ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df" 2025-05-29 00:44:56.916690 | orchestrator |  }, 2025-05-29 00:44:56.917476 | orchestrator |  { 2025-05-29 00:44:56.917991 | orchestrator |  "pv_name": "/dev/sdc", 2025-05-29 00:44:56.918783 | orchestrator |  "vg_name": "ceph-05ae814f-03ae-5777-aef4-91f0b0270e90" 2025-05-29 00:44:56.919447 | orchestrator |  } 2025-05-29 00:44:56.919807 | orchestrator |  ] 2025-05-29 00:44:56.920276 | orchestrator |  } 2025-05-29 00:44:56.921117 | orchestrator | } 2025-05-29 00:44:56.921558 | orchestrator | 2025-05-29 00:44:56.922065 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:44:56.922611 | orchestrator | 2025-05-29 00:44:56 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 2025-05-29 00:44:56.922857 | orchestrator | 2025-05-29 00:44:56 | INFO  | Please wait and do not abort execution. 
2025-05-29 00:44:56.923591 | orchestrator | testbed-node-3 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-29 00:44:56.923957 | orchestrator | testbed-node-4 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-29 00:44:56.924655 | orchestrator | testbed-node-5 : ok=51  changed=2  unreachable=0 failed=0 skipped=62  rescued=0 ignored=0 2025-05-29 00:44:56.924746 | orchestrator | 2025-05-29 00:44:56.925261 | orchestrator | 2025-05-29 00:44:56.925705 | orchestrator | 2025-05-29 00:44:56.926166 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 00:44:56.926620 | orchestrator | Thursday 29 May 2025 00:44:56 +0000 (0:00:00.577) 0:01:16.170 ********** 2025-05-29 00:44:56.927397 | orchestrator | =============================================================================== 2025-05-29 00:44:56.927534 | orchestrator | Create block VGs -------------------------------------------------------- 6.02s 2025-05-29 00:44:56.928011 | orchestrator | Create block LVs -------------------------------------------------------- 4.14s 2025-05-29 00:44:56.928600 | orchestrator | Print LVM report data --------------------------------------------------- 2.12s 2025-05-29 00:44:56.928925 | orchestrator | Gather DB VGs with total and available size in bytes -------------------- 2.01s 2025-05-29 00:44:56.929345 | orchestrator | Add known links to the list of available block devices ------------------ 1.70s 2025-05-29 00:44:56.929796 | orchestrator | Get list of Ceph LVs with associated VGs -------------------------------- 1.68s 2025-05-29 00:44:56.930121 | orchestrator | Gather DB+WAL VGs with total and available size in bytes ---------------- 1.64s 2025-05-29 00:44:56.930690 | orchestrator | Get list of Ceph PVs with associated VGs -------------------------------- 1.64s 2025-05-29 00:44:56.930906 | orchestrator | Add known partitions to the list of available block devices 
------------- 1.59s 2025-05-29 00:44:56.931285 | orchestrator | Gather WAL VGs with total and available size in bytes ------------------- 1.58s 2025-05-29 00:44:56.931596 | orchestrator | Get extra vars for Ceph configuration ----------------------------------- 1.03s 2025-05-29 00:44:56.931921 | orchestrator | Add known partitions to the list of available block devices ------------- 0.88s 2025-05-29 00:44:56.932606 | orchestrator | Get initial list of available block devices ----------------------------- 0.81s 2025-05-29 00:44:56.932966 | orchestrator | Print number of OSDs wanted per DB VG ----------------------------------- 0.76s 2025-05-29 00:44:56.933153 | orchestrator | Fail if block LV defined in lvm_volumes is missing ---------------------- 0.73s 2025-05-29 00:44:56.933688 | orchestrator | Print 'Create DB LVs for ceph_db_wal_devices' --------------------------- 0.70s 2025-05-29 00:44:56.933889 | orchestrator | Fail if DB LV size < 30 GiB for ceph_db_wal_devices --------------------- 0.70s 2025-05-29 00:44:56.934212 | orchestrator | Print 'Create block LVs' ------------------------------------------------ 0.70s 2025-05-29 00:44:56.934555 | orchestrator | Add known partitions to the list of available block devices ------------- 0.70s 2025-05-29 00:44:56.934907 | orchestrator | Print 'Create DB LVs for ceph_db_devices' ------------------------------- 0.68s 2025-05-29 00:44:58.819041 | orchestrator | 2025-05-29 00:44:58 | INFO  | Task daa1f7a7-820a-4cd3-a172-a678976cf8fc (facts) was prepared for execution. 2025-05-29 00:44:58.820017 | orchestrator | 2025-05-29 00:44:58 | INFO  | It takes a moment until task daa1f7a7-820a-4cd3-a172-a678976cf8fc (facts) has been started and output is visible here. 
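The "Print LVM report data" task earlier in this play dumps an lvs/pvs-style JSON structure pairing Ceph OSD block LVs with their VGs and backing PVs. A minimal sketch (not part of the job; purely illustrative) of cross-checking such a report, mapping each physical volume to the OSD block LV carved out of its volume group — the data below is copied verbatim from the report printed above:

```python
import json

# LVM report as printed by the "Print LVM report data" task
# (structure and values copied from the log for testbed-node-5).
lvm_report = json.loads("""
{
  "lv": [
    {"lv_name": "osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90",
     "vg_name": "ceph-05ae814f-03ae-5777-aef4-91f0b0270e90"},
    {"lv_name": "osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df",
     "vg_name": "ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df"}
  ],
  "pv": [
    {"pv_name": "/dev/sdb",
     "vg_name": "ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df"},
    {"pv_name": "/dev/sdc",
     "vg_name": "ceph-05ae814f-03ae-5777-aef4-91f0b0270e90"}
  ]
}
""")

# Each VG holds exactly one OSD block LV, so a VG-keyed lookup suffices.
lv_by_vg = {lv["vg_name"]: lv["lv_name"] for lv in lvm_report["lv"]}

# Resolve each PV to the OSD block LV living on its VG.
pv_to_lv = {pv["pv_name"]: lv_by_vg[pv["vg_name"]] for pv in lvm_report["pv"]}

for pv, lv in sorted(pv_to_lv.items()):
    print(f"{pv} -> {lv}")
# → /dev/sdb -> osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df
# → /dev/sdc -> osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90
```

On a live node, the same shape can be produced with `lvs`/`pvs` using `--reportformat json`; here the report is inlined so the check is self-contained.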
2025-05-29 00:45:02.038253 | orchestrator | 2025-05-29 00:45:02.038932 | orchestrator | PLAY [Apply role facts] ******************************************************** 2025-05-29 00:45:02.040107 | orchestrator | 2025-05-29 00:45:02.044061 | orchestrator | TASK [osism.commons.facts : Create custom facts directory] ********************* 2025-05-29 00:45:02.044852 | orchestrator | Thursday 29 May 2025 00:45:02 +0000 (0:00:00.212) 0:00:00.212 ********** 2025-05-29 00:45:03.154579 | orchestrator | ok: [testbed-manager] 2025-05-29 00:45:03.155348 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:45:03.156564 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:45:03.159864 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:45:03.159897 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:45:03.159910 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:45:03.159922 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:45:03.159934 | orchestrator | 2025-05-29 00:45:03.160710 | orchestrator | TASK [osism.commons.facts : Copy fact files] *********************************** 2025-05-29 00:45:03.160818 | orchestrator | Thursday 29 May 2025 00:45:03 +0000 (0:00:01.114) 0:00:01.326 ********** 2025-05-29 00:45:03.360900 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:45:03.462289 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:45:03.548404 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:45:03.637842 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:45:03.717431 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:45:04.448465 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:45:04.452230 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:45:04.455546 | orchestrator | 2025-05-29 00:45:04.458315 | orchestrator | PLAY [Gather facts for all hosts] ********************************************** 2025-05-29 00:45:04.458683 | orchestrator | 2025-05-29 00:45:04.459585 | orchestrator | TASK [Gathers facts about hosts] 
*********************************************** 2025-05-29 00:45:04.460084 | orchestrator | Thursday 29 May 2025 00:45:04 +0000 (0:00:01.295) 0:00:02.622 ********** 2025-05-29 00:45:08.842509 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:45:08.842564 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:45:08.842569 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:45:08.842573 | orchestrator | ok: [testbed-manager] 2025-05-29 00:45:08.842602 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:45:08.842897 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:45:08.844385 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:45:08.845522 | orchestrator | 2025-05-29 00:45:08.846213 | orchestrator | PLAY [Gather facts for all hosts if using --limit] ***************************** 2025-05-29 00:45:08.847424 | orchestrator | 2025-05-29 00:45:08.848612 | orchestrator | TASK [Gather facts for all hosts] ********************************************** 2025-05-29 00:45:08.849533 | orchestrator | Thursday 29 May 2025 00:45:08 +0000 (0:00:04.391) 0:00:07.013 ********** 2025-05-29 00:45:09.145010 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:45:09.223584 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:45:09.313939 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:45:09.407497 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:45:09.491985 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:45:09.533521 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:45:09.533662 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:45:09.534111 | orchestrator | 2025-05-29 00:45:09.535156 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:45:09.535200 | orchestrator | 2025-05-29 00:45:09 | INFO  | Play has been completed. There may now be a delay until all logs have been written. 
2025-05-29 00:45:09.535214 | orchestrator | 2025-05-29 00:45:09 | INFO  | Please wait and do not abort execution. 2025-05-29 00:45:09.537037 | orchestrator | testbed-manager : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:45:09.548196 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:45:09.550272 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:45:09.551124 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:45:09.551788 | orchestrator | testbed-node-3 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:45:09.552550 | orchestrator | testbed-node-4 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:45:09.552897 | orchestrator | testbed-node-5 : ok=2  changed=0 unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:45:09.553597 | orchestrator | 2025-05-29 00:45:09.554352 | orchestrator | Thursday 29 May 2025 00:45:09 +0000 (0:00:00.698) 0:00:07.711 ********** 2025-05-29 00:45:09.554618 | orchestrator | =============================================================================== 2025-05-29 00:45:09.555418 | orchestrator | Gathers facts about hosts ----------------------------------------------- 4.39s 2025-05-29 00:45:09.555728 | orchestrator | osism.commons.facts : Copy fact files ----------------------------------- 1.30s 2025-05-29 00:45:09.556257 | orchestrator | osism.commons.facts : Create custom facts directory --------------------- 1.11s 2025-05-29 00:45:09.556587 | orchestrator | Gather facts for all hosts ---------------------------------------------- 0.70s 2025-05-29 00:45:10.102292 | orchestrator | 2025-05-29 00:45:10.106435 | orchestrator | --> DEPLOY IN A NUTSHELL -- START -- Thu May 29 00:45:10 UTC 2025 2025-05-29 00:45:10.106489 | 
orchestrator | 2025-05-29 00:45:11.483639 | orchestrator | 2025-05-29 00:45:11 | INFO  | Collection nutshell is prepared for execution 2025-05-29 00:45:11.483735 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [0] - dotfiles 2025-05-29 00:45:11.488088 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [0] - homer 2025-05-29 00:45:11.488149 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [0] - netdata 2025-05-29 00:45:11.488163 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [0] - openstackclient 2025-05-29 00:45:11.488809 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [0] - phpmyadmin 2025-05-29 00:45:11.488834 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [0] - common 2025-05-29 00:45:11.489283 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [1] -- loadbalancer 2025-05-29 00:45:11.489725 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [2] --- opensearch 2025-05-29 00:45:11.489747 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [2] --- mariadb-ng 2025-05-29 00:45:11.489759 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [3] ---- horizon 2025-05-29 00:45:11.489770 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [3] ---- keystone 2025-05-29 00:45:11.489781 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [4] ----- neutron 2025-05-29 00:45:11.489792 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [5] ------ wait-for-nova 2025-05-29 00:45:11.489803 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [5] ------ octavia 2025-05-29 00:45:11.490126 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [4] ----- barbican 2025-05-29 00:45:11.490320 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [4] ----- designate 2025-05-29 00:45:11.490340 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [4] ----- ironic 2025-05-29 00:45:11.490352 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [4] ----- placement 2025-05-29 00:45:11.490636 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [4] ----- magnum 2025-05-29 00:45:11.490657 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [1] 
-- openvswitch 2025-05-29 00:45:11.490669 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [2] --- ovn 2025-05-29 00:45:11.490876 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [1] -- memcached 2025-05-29 00:45:11.490896 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [1] -- redis 2025-05-29 00:45:11.490908 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [1] -- rabbitmq-ng 2025-05-29 00:45:11.491256 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [0] - kubernetes 2025-05-29 00:45:11.491276 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [1] -- kubeconfig 2025-05-29 00:45:11.491288 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [1] -- copy-kubeconfig 2025-05-29 00:45:11.491380 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [0] - ceph 2025-05-29 00:45:11.492777 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [1] -- ceph-pools 2025-05-29 00:45:11.492850 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [2] --- copy-ceph-keys 2025-05-29 00:45:11.492892 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [3] ---- cephclient 2025-05-29 00:45:11.492905 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [4] ----- ceph-bootstrap-dashboard 2025-05-29 00:45:11.492918 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [4] ----- wait-for-keystone 2025-05-29 00:45:11.492929 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [5] ------ kolla-ceph-rgw 2025-05-29 00:45:11.492966 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [5] ------ glance 2025-05-29 00:45:11.493029 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [5] ------ cinder 2025-05-29 00:45:11.493046 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [5] ------ nova 2025-05-29 00:45:11.493467 | orchestrator | 2025-05-29 00:45:11 | INFO  | A [4] ----- prometheus 2025-05-29 00:45:11.493491 | orchestrator | 2025-05-29 00:45:11 | INFO  | D [5] ------ grafana 2025-05-29 00:45:11.614981 | orchestrator | 2025-05-29 00:45:11 | INFO  | All tasks of the collection nutshell are prepared for execution 2025-05-29 00:45:11.615124 | 
orchestrator | 2025-05-29 00:45:11 | INFO  | Tasks are running in the background 2025-05-29 00:45:13.403955 | orchestrator | 2025-05-29 00:45:13 | INFO  | No task IDs specified, wait for all currently running tasks 2025-05-29 00:45:15.507493 | orchestrator | 2025-05-29 00:45:15 | INFO  | Task d7a6ddfc-d116-49a7-8b24-e6ca743dad71 is in state STARTED 2025-05-29 00:45:15.507733 | orchestrator | 2025-05-29 00:45:15 | INFO  | Task c2ab43e6-0ff6-45f7-b32f-fc712c983458 is in state STARTED 2025-05-29 00:45:15.508163 | orchestrator | 2025-05-29 00:45:15 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:45:15.508781 | orchestrator | 2025-05-29 00:45:15 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:45:15.509221 | orchestrator | 2025-05-29 00:45:15 | INFO  | Task 235d6dd2-0024-4c95-8fa6-215ba96984dc is in state STARTED 2025-05-29 00:45:15.509726 | orchestrator | 2025-05-29 00:45:15 | INFO  | Task 0792f46a-ecd1-4626-93d9-c67567202fd9 is in state STARTED 2025-05-29 00:45:15.509758 | orchestrator | 2025-05-29 00:45:15 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:45:18.585966 | orchestrator | 2025-05-29 00:45:18 | INFO  | Task d7a6ddfc-d116-49a7-8b24-e6ca743dad71 is in state STARTED 2025-05-29 00:45:18.586165 | orchestrator | 2025-05-29 00:45:18 | INFO  | Task c2ab43e6-0ff6-45f7-b32f-fc712c983458 is in state STARTED 2025-05-29 00:45:18.586184 | orchestrator | 2025-05-29 00:45:18 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:45:18.586219 | orchestrator | 2025-05-29 00:45:18 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:45:18.586232 | orchestrator | 2025-05-29 00:45:18 | INFO  | Task 235d6dd2-0024-4c95-8fa6-215ba96984dc is in state STARTED 2025-05-29 00:45:18.586244 | orchestrator | 2025-05-29 00:45:18 | INFO  | Task 0792f46a-ecd1-4626-93d9-c67567202fd9 is in state STARTED 2025-05-29 00:45:18.586256 | 
orchestrator | 2025-05-29 00:45:18 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:45:21.637326 | orchestrator | 2025-05-29 00:45:21 | INFO  | Task d7a6ddfc-d116-49a7-8b24-e6ca743dad71 is in state STARTED 2025-05-29 00:45:21.637438 | orchestrator | 2025-05-29 00:45:21 | INFO  | Task c2ab43e6-0ff6-45f7-b32f-fc712c983458 is in state STARTED 2025-05-29 00:45:21.640514 | orchestrator | 2025-05-29 00:45:21 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:45:21.640543 | orchestrator | 2025-05-29 00:45:21 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:45:21.644391 | orchestrator | 2025-05-29 00:45:21 | INFO  | Task 235d6dd2-0024-4c95-8fa6-215ba96984dc is in state STARTED 2025-05-29 00:45:21.644758 | orchestrator | 2025-05-29 00:45:21 | INFO  | Task 0792f46a-ecd1-4626-93d9-c67567202fd9 is in state STARTED 2025-05-29 00:45:21.644779 | orchestrator | 2025-05-29 00:45:21 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:45:24.706431 | orchestrator | 2025-05-29 00:45:24 | INFO  | Task d7a6ddfc-d116-49a7-8b24-e6ca743dad71 is in state STARTED 2025-05-29 00:45:24.706530 | orchestrator | 2025-05-29 00:45:24 | INFO  | Task c2ab43e6-0ff6-45f7-b32f-fc712c983458 is in state STARTED 2025-05-29 00:45:24.706825 | orchestrator | 2025-05-29 00:45:24 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:45:24.707444 | orchestrator | 2025-05-29 00:45:24 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:45:24.708387 | orchestrator | 2025-05-29 00:45:24 | INFO  | Task 235d6dd2-0024-4c95-8fa6-215ba96984dc is in state STARTED 2025-05-29 00:45:24.708400 | orchestrator | 2025-05-29 00:45:24 | INFO  | Task 0792f46a-ecd1-4626-93d9-c67567202fd9 is in state STARTED 2025-05-29 00:45:24.708409 | orchestrator | 2025-05-29 00:45:24 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:45:27.764639 | orchestrator | 2025-05-29 
00:45:27 | INFO  | Task d7a6ddfc-d116-49a7-8b24-e6ca743dad71 is in state STARTED 2025-05-29 00:45:27.768387 | orchestrator | 2025-05-29 00:45:27 | INFO  | Task c2ab43e6-0ff6-45f7-b32f-fc712c983458 is in state STARTED 2025-05-29 00:45:27.768566 | orchestrator | 2025-05-29 00:45:27 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:45:27.769073 | orchestrator | 2025-05-29 00:45:27 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:45:27.769500 | orchestrator | 2025-05-29 00:45:27 | INFO  | Task 235d6dd2-0024-4c95-8fa6-215ba96984dc is in state STARTED 2025-05-29 00:45:27.770699 | orchestrator | 2025-05-29 00:45:27 | INFO  | Task 0792f46a-ecd1-4626-93d9-c67567202fd9 is in state STARTED 2025-05-29 00:45:27.770724 | orchestrator | 2025-05-29 00:45:27 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:45:30.811336 | orchestrator | 2025-05-29 00:45:30 | INFO  | Task fff4d2b7-a03f-4a95-9020-90f671afff1d is in state STARTED 2025-05-29 00:45:30.811434 | orchestrator | 2025-05-29 00:45:30 | INFO  | Task d7a6ddfc-d116-49a7-8b24-e6ca743dad71 is in state STARTED 2025-05-29 00:45:30.811521 | orchestrator | 2025-05-29 00:45:30 | INFO  | Task c2ab43e6-0ff6-45f7-b32f-fc712c983458 is in state STARTED 2025-05-29 00:45:30.813288 | orchestrator | 2025-05-29 00:45:30 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:45:30.813323 | orchestrator | 2025-05-29 00:45:30 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:45:30.814917 | orchestrator | 2025-05-29 00:45:30.814947 | orchestrator | PLAY [Apply role geerlingguy.dotfiles] ***************************************** 2025-05-29 00:45:30.814961 | orchestrator | 2025-05-29 00:45:30.814973 | orchestrator | TASK [geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally.] 
**** 2025-05-29 00:45:30.814985 | orchestrator | Thursday 29 May 2025 00:45:19 +0000 (0:00:00.234) 0:00:00.234 ********** 2025-05-29 00:45:30.814997 | orchestrator | changed: [testbed-manager] 2025-05-29 00:45:30.815010 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:45:30.815050 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:45:30.815062 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:45:30.815073 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:45:30.815084 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:45:30.815095 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:45:30.815105 | orchestrator | 2025-05-29 00:45:30.815116 | orchestrator | TASK [geerlingguy.dotfiles : Ensure all configured dotfiles are links.] ******** 2025-05-29 00:45:30.815127 | orchestrator | Thursday 29 May 2025 00:45:22 +0000 (0:00:03.541) 0:00:03.776 ********** 2025-05-29 00:45:30.815139 | orchestrator | ok: [testbed-node-0] => (item=.tmux.conf) 2025-05-29 00:45:30.815150 | orchestrator | ok: [testbed-node-1] => (item=.tmux.conf) 2025-05-29 00:45:30.815161 | orchestrator | ok: [testbed-manager] => (item=.tmux.conf) 2025-05-29 00:45:30.815193 | orchestrator | ok: [testbed-node-2] => (item=.tmux.conf) 2025-05-29 00:45:30.815204 | orchestrator | ok: [testbed-node-3] => (item=.tmux.conf) 2025-05-29 00:45:30.815215 | orchestrator | ok: [testbed-node-4] => (item=.tmux.conf) 2025-05-29 00:45:30.815225 | orchestrator | ok: [testbed-node-5] => (item=.tmux.conf) 2025-05-29 00:45:30.815236 | orchestrator | 2025-05-29 00:45:30.815246 | orchestrator | TASK [geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked.] 
*** 2025-05-29 00:45:30.815257 | orchestrator | Thursday 29 May 2025 00:45:24 +0000 (0:00:01.844) 0:00:05.620 ********** 2025-05-29 00:45:30.815272 | orchestrator | ok: [testbed-node-0] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-29 00:45:23.266434', 'end': '2025-05-29 00:45:23.274538', 'delta': '0:00:00.008104', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-29 00:45:30.815300 | orchestrator | ok: [testbed-manager] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-29 00:45:23.544007', 'end': '2025-05-29 00:45:23.549500', 'delta': '0:00:00.005493', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-29 00:45:30.815313 | orchestrator | ok: [testbed-node-1] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access 
'/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-29 00:45:23.496540', 'end': '2025-05-29 00:45:23.502731', 'delta': '0:00:00.006191', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-29 00:45:30.815353 | orchestrator | ok: [testbed-node-2] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-29 00:45:23.708015', 'end': '2025-05-29 00:45:23.711427', 'delta': '0:00:00.003412', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-29 00:45:30.815372 | orchestrator | ok: [testbed-node-3] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-29 00:45:23.925625', 'end': '2025-05-29 00:45:23.932943', 'delta': '0:00:00.007318', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': 
{'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-29 00:45:30.815404 | orchestrator | ok: [testbed-node-4] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-29 00:45:24.123492', 'end': '2025-05-29 00:45:24.130961', 'delta': '0:00:00.007469', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': ["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-29 00:45:30.815429 | orchestrator | ok: [testbed-node-5] => (item=[0, {'changed': False, 'stdout': '', 'stderr': "ls: cannot access '/home/dragon/.tmux.conf': No such file or directory", 'rc': 2, 'cmd': ['ls', '-F', '~/.tmux.conf'], 'start': '2025-05-29 00:45:24.239050', 'end': '2025-05-29 00:45:24.246405', 'delta': '0:00:00.007355', 'failed': False, 'msg': 'non-zero return code', 'invocation': {'module_args': {'_raw_params': 'ls -F ~/.tmux.conf', '_uses_shell': False, 'expand_argument_vars': True, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': 
["ls: cannot access '/home/dragon/.tmux.conf': No such file or directory"], 'failed_when_result': False, 'item': '.tmux.conf', 'ansible_loop_var': 'item'}]) 2025-05-29 00:45:30.815449 | orchestrator | 2025-05-29 00:45:30.815468 | orchestrator | TASK [geerlingguy.dotfiles : Link dotfiles into home folder.] ****************** 2025-05-29 00:45:30.815487 | orchestrator | Thursday 29 May 2025 00:45:26 +0000 (0:00:01.886) 0:00:07.506 ********** 2025-05-29 00:45:30.815505 | orchestrator | changed: [testbed-manager] => (item=.tmux.conf) 2025-05-29 00:45:30.815517 | orchestrator | changed: [testbed-node-0] => (item=.tmux.conf) 2025-05-29 00:45:30.815528 | orchestrator | changed: [testbed-node-1] => (item=.tmux.conf) 2025-05-29 00:45:30.815542 | orchestrator | changed: [testbed-node-2] => (item=.tmux.conf) 2025-05-29 00:45:30.815554 | orchestrator | changed: [testbed-node-3] => (item=.tmux.conf) 2025-05-29 00:45:30.815567 | orchestrator | changed: [testbed-node-4] => (item=.tmux.conf) 2025-05-29 00:45:30.815585 | orchestrator | changed: [testbed-node-5] => (item=.tmux.conf) 2025-05-29 00:45:30.815611 | orchestrator | 2025-05-29 00:45:30.815634 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:45:30.815652 | orchestrator | testbed-manager : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:45:30.815672 | orchestrator | testbed-node-0 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:45:30.815690 | orchestrator | testbed-node-1 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:45:30.815719 | orchestrator | testbed-node-2 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:45:30.815751 | orchestrator | testbed-node-3 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:45:30.815770 | orchestrator | testbed-node-4 : ok=4  changed=2  unreachable=0 
failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:45:30.815788 | orchestrator | testbed-node-5 : ok=4  changed=2  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:45:30.815807 | orchestrator |
2025-05-29 00:45:30.815825 | orchestrator | Thursday 29 May 2025 00:45:28 +0000 (0:00:02.273) 0:00:09.780 **********
2025-05-29 00:45:30.815843 | orchestrator | ===============================================================================
2025-05-29 00:45:30.815862 | orchestrator | geerlingguy.dotfiles : Ensure dotfiles repository is cloned locally. ---- 3.54s
2025-05-29 00:45:30.815881 | orchestrator | geerlingguy.dotfiles : Link dotfiles into home folder. ------------------ 2.27s
2025-05-29 00:45:30.815900 | orchestrator | geerlingguy.dotfiles : Remove existing dotfiles file if a replacement is being linked. --- 1.89s
2025-05-29 00:45:30.815920 | orchestrator | geerlingguy.dotfiles : Ensure all configured dotfiles are links. -------- 1.84s
2025-05-29 00:45:30.815979 | orchestrator | 2025-05-29 00:45:30 | INFO  | Task 235d6dd2-0024-4c95-8fa6-215ba96984dc is in state STARTED
2025-05-29 00:45:30.816000 | orchestrator | 2025-05-29 00:45:30 | INFO  | Task 0792f46a-ecd1-4626-93d9-c67567202fd9 is in state SUCCESS
2025-05-29 00:45:30.816018 | orchestrator | 2025-05-29 00:45:30 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:45:33.871667 | orchestrator | 2025-05-29 00:45:33 | INFO  | Task fff4d2b7-a03f-4a95-9020-90f671afff1d is in state STARTED
2025-05-29 00:45:33.871778 | orchestrator | 2025-05-29 00:45:33 | INFO  | Task d7a6ddfc-d116-49a7-8b24-e6ca743dad71 is in state STARTED
2025-05-29 00:45:33.873387 | orchestrator | 2025-05-29 00:45:33 | INFO  | Task c2ab43e6-0ff6-45f7-b32f-fc712c983458 is in state STARTED
2025-05-29 00:45:33.874360 | orchestrator | 2025-05-29 00:45:33 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:45:33.876310 | orchestrator | 2025-05-29 00:45:33 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED
2025-05-29 00:45:33.876341 | orchestrator | 2025-05-29 00:45:33 | INFO  | Task 235d6dd2-0024-4c95-8fa6-215ba96984dc is in state STARTED
2025-05-29 00:45:33.876354 | orchestrator | 2025-05-29 00:45:33 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:45:52.254524 | orchestrator | 2025-05-29 00:45:52 | INFO  | Task fff4d2b7-a03f-4a95-9020-90f671afff1d is in state STARTED
2025-05-29 00:45:52.254615 | orchestrator | 2025-05-29 00:45:52 | INFO  | Task d7a6ddfc-d116-49a7-8b24-e6ca743dad71 is in state STARTED
2025-05-29 00:45:52.254630 | orchestrator | 2025-05-29 00:45:52 | INFO  | Task c2ab43e6-0ff6-45f7-b32f-fc712c983458 is in state SUCCESS
2025-05-29 00:45:52.255214 | orchestrator | 2025-05-29 00:45:52 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:45:52.268535 | orchestrator | 2025-05-29 00:45:52 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED
2025-05-29 00:45:52.268561 | orchestrator | 2025-05-29 00:45:52 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:45:52.268572 | orchestrator | 2025-05-29 00:45:52 | INFO  | Task 235d6dd2-0024-4c95-8fa6-215ba96984dc is in state STARTED
2025-05-29 00:45:52.268583 | orchestrator | 2025-05-29 00:45:52 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:46:07.682393 | orchestrator | 2025-05-29 00:46:07 | INFO  | Task fff4d2b7-a03f-4a95-9020-90f671afff1d is in state STARTED
2025-05-29 00:46:07.682513 | orchestrator | 2025-05-29 00:46:07 | INFO  | Task d7a6ddfc-d116-49a7-8b24-e6ca743dad71 is in state SUCCESS
2025-05-29 00:46:07.682529 | orchestrator | 2025-05-29 00:46:07 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:46:07.684350 | orchestrator | 2025-05-29 00:46:07 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED
2025-05-29 00:46:07.684387 | orchestrator | 2025-05-29 00:46:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:46:07.684407 | orchestrator | 2025-05-29 00:46:07 | INFO  | Task 235d6dd2-0024-4c95-8fa6-215ba96984dc is in state STARTED
2025-05-29 00:46:07.684427 | orchestrator | 2025-05-29 00:46:07 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:46:26.033848 | orchestrator | 2025-05-29 00:46:26 | INFO  | Task fff4d2b7-a03f-4a95-9020-90f671afff1d is in state STARTED
2025-05-29 00:46:26.034107 | orchestrator | 2025-05-29 00:46:26 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:46:26.034723 | orchestrator | 2025-05-29 00:46:26 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED
2025-05-29 00:46:26.035710 | orchestrator | 2025-05-29 00:46:26 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:46:26.036753 | orchestrator | 2025-05-29 00:46:26 | INFO  | Task 235d6dd2-0024-4c95-8fa6-215ba96984dc is in state SUCCESS
2025-05-29 00:46:26.036804 | orchestrator | 2025-05-29 00:46:26 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:46:26.037902 | orchestrator |
2025-05-29 00:46:26.037949 | orchestrator |
2025-05-29 00:46:26.037960 | orchestrator | PLAY [Apply role homer] ********************************************************
2025-05-29 00:46:26.037969 | orchestrator |
2025-05-29 00:46:26.037978 | orchestrator | TASK [osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards] ***
2025-05-29 00:46:26.037987 | orchestrator | Thursday 29 May 2025 00:45:16 +0000 (0:00:00.294) 0:00:00.294 **********
2025-05-29 00:46:26.037996 | orchestrator | ok: [testbed-manager] => {
2025-05-29 00:46:26.038006 | orchestrator |  "msg": "The support for the homer_url_kibana has been removed. Please use the homer_url_opensearch_dashboards parameter."
2025-05-29 00:46:26.038052 | orchestrator | }
2025-05-29 00:46:26.038064 | orchestrator |
2025-05-29 00:46:26.038073 | orchestrator | TASK [osism.services.homer : Create traefik external network] ******************
2025-05-29 00:46:26.038082 | orchestrator | Thursday 29 May 2025 00:45:17 +0000 (0:00:00.312) 0:00:00.606 **********
2025-05-29 00:46:26.038091 | orchestrator | ok: [testbed-manager]
2025-05-29 00:46:26.038101 | orchestrator |
2025-05-29 00:46:26.038109 | orchestrator | TASK [osism.services.homer : Create required directories] **********************
2025-05-29 00:46:26.038134 | orchestrator | Thursday 29 May 2025 00:45:18 +0000 (0:00:01.025) 0:00:01.631 **********
2025-05-29 00:46:26.038143 | orchestrator | changed: [testbed-manager] => (item=/opt/homer/configuration)
2025-05-29 00:46:26.038152 | orchestrator | ok: [testbed-manager] => (item=/opt/homer)
2025-05-29 00:46:26.038161 | orchestrator |
2025-05-29 00:46:26.038170 | orchestrator | TASK [osism.services.homer : Copy config.yml configuration file] ***************
2025-05-29 00:46:26.038178 | orchestrator | Thursday 29 May 2025 00:45:18 +0000 (0:00:00.717) 0:00:02.349 **********
2025-05-29 00:46:26.038187 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.038196 | orchestrator |
2025-05-29 00:46:26.038204 | orchestrator | TASK [osism.services.homer : Copy docker-compose.yml file] *********************
2025-05-29 00:46:26.038213 | orchestrator | Thursday 29 May 2025 00:45:21 +0000 (0:00:02.220) 0:00:04.570 **********
2025-05-29 00:46:26.038222 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.038230 | orchestrator |
2025-05-29 00:46:26.038239 | orchestrator | TASK [osism.services.homer : Manage homer service] *****************************
2025-05-29 00:46:26.038252 | orchestrator | Thursday 29 May 2025 00:45:22 +0000 (0:00:01.329) 0:00:05.900 **********
2025-05-29 00:46:26.038262 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage homer service (10 retries left).
2025-05-29 00:46:26.038270 | orchestrator | ok: [testbed-manager]
2025-05-29 00:46:26.038279 | orchestrator |
2025-05-29 00:46:26.038288 | orchestrator | RUNNING HANDLER [osism.services.homer : Restart homer service] *****************
2025-05-29 00:46:26.038297 | orchestrator | Thursday 29 May 2025 00:45:48 +0000 (0:00:25.499) 0:00:31.399 **********
2025-05-29 00:46:26.038305 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.038314 | orchestrator |
2025-05-29 00:46:26.038322 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:46:26.038331 | orchestrator | testbed-manager : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:46:26.038341 | orchestrator |
2025-05-29 00:46:26.038350 | orchestrator | Thursday 29 May 2025 00:45:50 +0000 (0:00:02.296) 0:00:33.696 **********
2025-05-29 00:46:26.038359 | orchestrator | ===============================================================================
2025-05-29 00:46:26.038368 | orchestrator | osism.services.homer : Manage homer service ---------------------------- 25.50s
2025-05-29 00:46:26.038377 | orchestrator | osism.services.homer : Restart homer service ---------------------------- 2.30s
2025-05-29 00:46:26.038385 | orchestrator | osism.services.homer : Copy config.yml configuration file --------------- 2.22s
2025-05-29 00:46:26.038394 | orchestrator | osism.services.homer : Copy docker-compose.yml file --------------------- 1.33s
2025-05-29 00:46:26.038402 | orchestrator | osism.services.homer : Create traefik external network ------------------ 1.03s
2025-05-29 00:46:26.038411 | orchestrator | osism.services.homer : Create required directories ---------------------- 0.72s
2025-05-29 00:46:26.038420 | orchestrator | osism.services.homer : Inform about new parameter homer_url_opensearch_dashboards --- 0.31s
2025-05-29 00:46:26.038428 | orchestrator |
2025-05-29 00:46:26.038437 | orchestrator |
2025-05-29 00:46:26.038445 | orchestrator | PLAY [Apply role openstackclient] **********************************************
2025-05-29 00:46:26.038454 | orchestrator |
2025-05-29 00:46:26.038463 | orchestrator | TASK [osism.services.openstackclient : Include tasks] **************************
2025-05-29 00:46:26.038472 | orchestrator | Thursday 29 May 2025 00:45:18 +0000 (0:00:00.241) 0:00:00.241 **********
2025-05-29 00:46:26.038480 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/openstackclient/tasks/container-Debian-family.yml for testbed-manager
2025-05-29 00:46:26.038490 | orchestrator |
2025-05-29 00:46:26.038499 | orchestrator | TASK [osism.services.openstackclient : Create required directories] ************
2025-05-29 00:46:26.038509 | orchestrator | Thursday 29 May 2025 00:45:19 +0000 (0:00:00.291) 0:00:00.532 **********
2025-05-29 00:46:26.038519 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/openstack)
2025-05-29 00:46:26.038529 | orchestrator | changed: [testbed-manager] => (item=/opt/openstackclient/data)
2025-05-29 00:46:26.038544 | orchestrator | ok: [testbed-manager] => (item=/opt/openstackclient)
2025-05-29 00:46:26.038554 | orchestrator |
2025-05-29 00:46:26.038565 | orchestrator | TASK [osism.services.openstackclient : Copy docker-compose.yml file] ***********
2025-05-29 00:46:26.038575 | orchestrator | Thursday 29 May 2025 00:45:20 +0000 (0:00:01.506) 0:00:02.039 **********
2025-05-29 00:46:26.038585 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.038596 | orchestrator |
2025-05-29 00:46:26.038606 | orchestrator | TASK [osism.services.openstackclient : Manage openstackclient service] *********
2025-05-29 00:46:26.038616 | orchestrator | Thursday 29 May 2025 00:45:22 +0000 (0:00:01.473) 0:00:03.512 **********
2025-05-29 00:46:26.038627 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage openstackclient service (10 retries left).
2025-05-29 00:46:26.038637 | orchestrator | ok: [testbed-manager]
2025-05-29 00:46:26.038648 | orchestrator |
2025-05-29 00:46:26.038668 | orchestrator | TASK [osism.services.openstackclient : Copy openstack wrapper script] **********
2025-05-29 00:46:26.038679 | orchestrator | Thursday 29 May 2025 00:45:59 +0000 (0:00:37.007) 0:00:40.519 **********
2025-05-29 00:46:26.038689 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.038699 | orchestrator |
2025-05-29 00:46:26.038709 | orchestrator | TASK [osism.services.openstackclient : Remove ospurge wrapper script] **********
2025-05-29 00:46:26.038719 | orchestrator | Thursday 29 May 2025 00:46:00 +0000 (0:00:01.309) 0:00:41.829 **********
2025-05-29 00:46:26.038728 | orchestrator | ok: [testbed-manager]
2025-05-29 00:46:26.038738 | orchestrator |
2025-05-29 00:46:26.038748 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Restart openstackclient service] ***
2025-05-29 00:46:26.038758 | orchestrator | Thursday 29 May 2025 00:46:01 +0000 (0:00:01.032) 0:00:42.862 **********
2025-05-29 00:46:26.038768 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.038778 | orchestrator |
2025-05-29 00:46:26.038788 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Ensure that all containers are up] ***
2025-05-29 00:46:26.038798 | orchestrator | Thursday 29 May 2025 00:46:04 +0000 (0:00:02.588) 0:00:45.450 **********
2025-05-29 00:46:26.038808 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.038817 | orchestrator |
2025-05-29 00:46:26.038827 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Wait for an healthy service] ***
2025-05-29 00:46:26.038837 | orchestrator | Thursday 29 May 2025 00:46:05 +0000 (0:00:01.605) 0:00:47.055 **********
2025-05-29 00:46:26.038848 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.038858 | orchestrator |
2025-05-29 00:46:26.038866 | orchestrator | RUNNING HANDLER [osism.services.openstackclient : Copy bash completion script] ***
2025-05-29 00:46:26.038875 | orchestrator | Thursday 29 May 2025 00:46:06 +0000 (0:00:00.789) 0:00:47.845 **********
2025-05-29 00:46:26.038883 | orchestrator | ok: [testbed-manager]
2025-05-29 00:46:26.038892 | orchestrator |
2025-05-29 00:46:26.038900 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:46:26.038909 | orchestrator | testbed-manager : ok=10  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:46:26.038918 | orchestrator |
2025-05-29 00:46:26.038943 | orchestrator | Thursday 29 May 2025 00:46:06 +0000 (0:00:00.423) 0:00:48.269 **********
2025-05-29 00:46:26.038953 | orchestrator | ===============================================================================
2025-05-29 00:46:26.038961 | orchestrator | osism.services.openstackclient : Manage openstackclient service -------- 37.01s
2025-05-29 00:46:26.038970 | orchestrator | osism.services.openstackclient : Restart openstackclient service -------- 2.58s
2025-05-29 00:46:26.038978 | orchestrator | osism.services.openstackclient : Ensure that all containers are up ------ 1.61s
2025-05-29 00:46:26.038987 | orchestrator | osism.services.openstackclient : Create required directories ------------ 1.51s
2025-05-29 00:46:26.038995 | orchestrator | osism.services.openstackclient : Copy docker-compose.yml file ----------- 1.47s
2025-05-29 00:46:26.039004 | orchestrator | osism.services.openstackclient : Copy openstack wrapper script ---------- 1.31s
2025-05-29 00:46:26.039012 | orchestrator | osism.services.openstackclient : Remove ospurge wrapper script ---------- 1.04s
2025-05-29 00:46:26.039026 | orchestrator | osism.services.openstackclient : Wait for an healthy service ------------ 0.79s
2025-05-29 00:46:26.039035 | orchestrator | osism.services.openstackclient : Copy bash completion script ------------ 0.42s
2025-05-29 00:46:26.039043 | orchestrator | osism.services.openstackclient : Include tasks -------------------------- 0.29s
2025-05-29 00:46:26.039052 | orchestrator |
2025-05-29 00:46:26.039060 | orchestrator |
2025-05-29 00:46:26.039069 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-29 00:46:26.039077 | orchestrator |
2025-05-29 00:46:26.039086 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-29 00:46:26.039095 | orchestrator | Thursday 29 May 2025 00:45:18 +0000 (0:00:00.271) 0:00:00.271 **********
2025-05-29 00:46:26.039103 | orchestrator | changed: [testbed-manager] => (item=enable_netdata_True)
2025-05-29 00:46:26.039112 | orchestrator | changed: [testbed-node-0] => (item=enable_netdata_True)
2025-05-29 00:46:26.039120 | orchestrator | changed: [testbed-node-1] => (item=enable_netdata_True)
2025-05-29 00:46:26.039129 | orchestrator | changed: [testbed-node-2] => (item=enable_netdata_True)
2025-05-29 00:46:26.039137 | orchestrator | changed: [testbed-node-3] => (item=enable_netdata_True)
2025-05-29 00:46:26.039146 | orchestrator | changed: [testbed-node-4] => (item=enable_netdata_True)
2025-05-29 00:46:26.039154 | orchestrator | changed: [testbed-node-5] => (item=enable_netdata_True)
2025-05-29 00:46:26.039163 | orchestrator |
2025-05-29 00:46:26.039171 | orchestrator | PLAY [Apply role netdata] ******************************************************
2025-05-29 00:46:26.039180 | orchestrator |
2025-05-29 00:46:26.039189 | orchestrator | TASK [osism.services.netdata : Include distribution specific install tasks] ****
2025-05-29 00:46:26.039197 | orchestrator | Thursday 29 May 2025 00:45:20 +0000 (0:00:01.580) 0:00:01.852 **********
2025-05-29 00:46:26.039216 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/install-Debian-family.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:46:26.039226 | orchestrator |
2025-05-29 00:46:26.039235 | orchestrator | TASK [osism.services.netdata : Remove old architecture-dependent repository] ***
2025-05-29 00:46:26.039244 | orchestrator | Thursday 29 May 2025 00:45:22 +0000 (0:00:01.878) 0:00:03.730 **********
2025-05-29 00:46:26.039252 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:46:26.039261 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:46:26.039270 | orchestrator | ok: [testbed-manager]
2025-05-29 00:46:26.039278 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:46:26.039287 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:46:26.039295 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:46:26.039304 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:46:26.039313 | orchestrator |
2025-05-29 00:46:26.039322 | orchestrator | TASK [osism.services.netdata : Install apt-transport-https package] ************
2025-05-29 00:46:26.039335 | orchestrator | Thursday 29 May 2025 00:45:24 +0000 (0:00:02.085) 0:00:05.817 **********
2025-05-29 00:46:26.039344 | orchestrator | ok: [testbed-manager]
2025-05-29 00:46:26.039353 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:46:26.039361 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:46:26.039370 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:46:26.039378 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:46:26.039387 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:46:26.039395 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:46:26.039404 | orchestrator |
2025-05-29 00:46:26.039412 | orchestrator | TASK [osism.services.netdata : Add repository gpg key] *************************
2025-05-29 00:46:26.039421 | orchestrator | Thursday 29 May 2025 00:45:27 +0000 (0:00:02.678) 0:00:08.496 **********
2025-05-29 00:46:26.039429 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.039438 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:46:26.039447 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:46:26.039455 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:46:26.039464 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:46:26.039472 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:46:26.039485 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:46:26.039494 | orchestrator |
2025-05-29 00:46:26.039503 | orchestrator | TASK [osism.services.netdata : Add repository] *********************************
2025-05-29 00:46:26.039511 | orchestrator | Thursday 29 May 2025 00:45:29 +0000 (0:00:02.152) 0:00:10.649 **********
2025-05-29 00:46:26.039520 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.039529 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:46:26.039537 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:46:26.039546 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:46:26.039554 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:46:26.039563 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:46:26.039571 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:46:26.039580 | orchestrator |
2025-05-29 00:46:26.039588 | orchestrator | TASK [osism.services.netdata : Install package netdata] ************************
2025-05-29 00:46:26.039597 | orchestrator | Thursday 29 May 2025 00:45:39 +0000 (0:00:10.069) 0:00:20.718 **********
2025-05-29 00:46:26.039605 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:46:26.039614 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:46:26.039622 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:46:26.039637 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:46:26.039646 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:46:26.039655 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:46:26.039663 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.039672 | orchestrator |
2025-05-29 00:46:26.039680 | orchestrator | TASK [osism.services.netdata : Include config tasks] ***************************
2025-05-29 00:46:26.039689 | orchestrator | Thursday 29 May 2025 00:45:57 +0000 (0:00:18.103) 0:00:38.821 **********
2025-05-29 00:46:26.039698 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/config.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:46:26.039707 | orchestrator |
2025-05-29 00:46:26.039716 | orchestrator | TASK [osism.services.netdata : Copy configuration files] ***********************
2025-05-29 00:46:26.039725 | orchestrator | Thursday 29 May 2025 00:46:00 +0000 (0:00:03.112) 0:00:41.934 **********
2025-05-29 00:46:26.039733 | orchestrator | changed: [testbed-manager] => (item=netdata.conf)
2025-05-29 00:46:26.039742 | orchestrator | changed: [testbed-node-1] => (item=netdata.conf)
2025-05-29 00:46:26.039751 | orchestrator | changed: [testbed-node-0] => (item=netdata.conf)
2025-05-29 00:46:26.039759 | orchestrator | changed: [testbed-node-3] => (item=netdata.conf)
2025-05-29 00:46:26.039768 | orchestrator | changed: [testbed-node-2] => (item=netdata.conf)
2025-05-29 00:46:26.039776 | orchestrator | changed: [testbed-node-4] => (item=netdata.conf)
2025-05-29 00:46:26.039785 | orchestrator | changed: [testbed-node-5] => (item=netdata.conf)
2025-05-29 00:46:26.039793 | orchestrator | changed: [testbed-node-0] => (item=stream.conf)
2025-05-29 00:46:26.039802 | orchestrator | changed: [testbed-node-3] => (item=stream.conf)
2025-05-29 00:46:26.039810 | orchestrator | changed: [testbed-manager] => (item=stream.conf)
2025-05-29 00:46:26.039819 | orchestrator | changed: [testbed-node-1] => (item=stream.conf)
2025-05-29 00:46:26.039827 | orchestrator | changed: [testbed-node-4] => (item=stream.conf)
2025-05-29 00:46:26.039836 | orchestrator | changed: [testbed-node-5] => (item=stream.conf)
2025-05-29 00:46:26.039844 | orchestrator | changed: [testbed-node-2] => (item=stream.conf)
2025-05-29 00:46:26.039853 | orchestrator |
2025-05-29 00:46:26.039861 | orchestrator | TASK [osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status] ***
2025-05-29 00:46:26.039870 | orchestrator | Thursday 29 May 2025 00:46:06 +0000 (0:00:06.267) 0:00:48.201 **********
2025-05-29 00:46:26.039879 | orchestrator | ok: [testbed-manager]
2025-05-29 00:46:26.039888 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:46:26.039896 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:46:26.039905 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:46:26.039913 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:46:26.039926 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:46:26.039948 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:46:26.039957 | orchestrator |
2025-05-29 00:46:26.039965 | orchestrator | TASK [osism.services.netdata : Opt out from anonymous statistics] **************
2025-05-29 00:46:26.039974 | orchestrator | Thursday 29 May 2025 00:46:08 +0000 (0:00:01.483) 0:00:49.685 **********
2025-05-29 00:46:26.039983 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:46:26.039991 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.040000 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:46:26.040009 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:46:26.040017 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:46:26.040026 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:46:26.040035 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:46:26.040043 | orchestrator |
2025-05-29 00:46:26.040052 | orchestrator | TASK [osism.services.netdata : Add netdata user to docker group] ***************
2025-05-29 00:46:26.040061 | orchestrator | Thursday 29 May 2025 00:46:11 +0000 (0:00:02.745) 0:00:52.430 **********
2025-05-29 00:46:26.040069 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:46:26.040078 | orchestrator | ok: [testbed-manager]
2025-05-29 00:46:26.040087 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:46:26.040095 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:46:26.040108 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:46:26.040117 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:46:26.040126 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:46:26.040134 | orchestrator |
2025-05-29 00:46:26.040143 | orchestrator | TASK [osism.services.netdata : Manage service netdata] *************************
2025-05-29 00:46:26.040152 | orchestrator | Thursday 29 May 2025 00:46:13 +0000 (0:00:02.107) 0:00:54.410 **********
2025-05-29 00:46:26.040160 | orchestrator | ok: [testbed-manager]
2025-05-29 00:46:26.040169 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:46:26.040177 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:46:26.040186 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:46:26.040194 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:46:26.040203 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:46:26.040211 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:46:26.040220 | orchestrator |
2025-05-29 00:46:26.040229 | orchestrator | TASK [osism.services.netdata : Include host type specific tasks] ***************
2025-05-29 00:46:26.040238 | orchestrator | Thursday 29 May 2025 00:46:15 +0000 (0:00:01.710) 0:00:56.517 **********
2025-05-29 00:46:26.040246 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/server.yml for testbed-manager
2025-05-29 00:46:26.040256 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/netdata/tasks/client.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:46:26.040265 | orchestrator |
2025-05-29 00:46:26.040274 | orchestrator | TASK [osism.services.netdata : Set sysctl vm.max_map_count parameter] **********
2025-05-29 00:46:26.040282 | orchestrator | Thursday 29 May 2025 00:46:16 +0000 (0:00:01.710) 0:00:58.228 **********
2025-05-29 00:46:26.040291 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.040300 | orchestrator |
2025-05-29 00:46:26.040308 | orchestrator | RUNNING HANDLER [osism.services.netdata : Restart service netdata] *************
2025-05-29 00:46:26.040317 | orchestrator | Thursday 29 May 2025 00:46:19 +0000 (0:00:02.091) 0:01:00.320 **********
2025-05-29 00:46:26.040325 | orchestrator | changed: [testbed-manager]
2025-05-29 00:46:26.040334 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:46:26.040343 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:46:26.040351 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:46:26.040360 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:46:26.040368 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:46:26.040377 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:46:26.040385 | orchestrator |
2025-05-29 00:46:26.040394 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:46:26.040403 | orchestrator | testbed-manager : ok=16  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:46:26.040417 | orchestrator | testbed-node-0 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:46:26.040426 | orchestrator | testbed-node-1 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:46:26.040435 | orchestrator | testbed-node-2 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:46:26.040444 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:46:26.040452 | orchestrator | testbed-node-4 : ok=15  changed=7
unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:46:26.040461 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:46:26.040470 | orchestrator | 2025-05-29 00:46:26.040478 | orchestrator | Thursday 29 May 2025 00:46:22 +0000 (0:00:03.293) 0:01:03.614 ********** 2025-05-29 00:46:26.040487 | orchestrator | =============================================================================== 2025-05-29 00:46:26.040496 | orchestrator | osism.services.netdata : Install package netdata ----------------------- 18.10s 2025-05-29 00:46:26.040504 | orchestrator | osism.services.netdata : Add repository -------------------------------- 10.07s 2025-05-29 00:46:26.040513 | orchestrator | osism.services.netdata : Copy configuration files ----------------------- 6.27s 2025-05-29 00:46:26.040522 | orchestrator | osism.services.netdata : Restart service netdata ------------------------ 3.29s 2025-05-29 00:46:26.040530 | orchestrator | osism.services.netdata : Include config tasks --------------------------- 3.11s 2025-05-29 00:46:26.040539 | orchestrator | osism.services.netdata : Opt out from anonymous statistics -------------- 2.75s 2025-05-29 00:46:26.040547 | orchestrator | osism.services.netdata : Install apt-transport-https package ------------ 2.68s 2025-05-29 00:46:26.040556 | orchestrator | osism.services.netdata : Add repository gpg key ------------------------- 2.15s 2025-05-29 00:46:26.040564 | orchestrator | osism.services.netdata : Manage service netdata ------------------------- 2.11s 2025-05-29 00:46:26.040573 | orchestrator | osism.services.netdata : Set sysctl vm.max_map_count parameter ---------- 2.09s 2025-05-29 00:46:26.040581 | orchestrator | osism.services.netdata : Remove old architecture-dependent repository --- 2.09s 2025-05-29 00:46:26.040590 | orchestrator | osism.services.netdata : Add netdata user to docker group --------------- 1.98s 2025-05-29 00:46:26.040598 | orchestrator 
| osism.services.netdata : Include distribution specific install tasks ---- 1.88s 2025-05-29 00:46:26.040607 | orchestrator | osism.services.netdata : Include host type specific tasks --------------- 1.71s 2025-05-29 00:46:26.040620 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.58s 2025-05-29 00:46:26.040629 | orchestrator | osism.services.netdata : Retrieve /etc/netdata/.opt-out-from-anonymous-statistics status --- 1.48s 2025-05-29 00:46:29.080442 | orchestrator | 2025-05-29 00:46:29 | INFO  | Task fff4d2b7-a03f-4a95-9020-90f671afff1d is in state STARTED 2025-05-29 00:46:29.083813 | orchestrator | 2025-05-29 00:46:29 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:46:29.084663 | orchestrator | 2025-05-29 00:46:29 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:46:29.088082 | orchestrator | 2025-05-29 00:46:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:46:29.088131 | orchestrator | 2025-05-29 00:46:29 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:46:32.147893 | orchestrator | 2025-05-29 00:46:32 | INFO  | Task fff4d2b7-a03f-4a95-9020-90f671afff1d is in state STARTED 2025-05-29 00:46:32.148222 | orchestrator | 2025-05-29 00:46:32 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:46:32.149303 | orchestrator | 2025-05-29 00:46:32 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:46:32.150701 | orchestrator | 2025-05-29 00:46:32 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:46:32.150732 | orchestrator | 2025-05-29 00:46:32 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:46:35.209202 | orchestrator | 2025-05-29 00:46:35 | INFO  | Task fff4d2b7-a03f-4a95-9020-90f671afff1d is in state STARTED 2025-05-29 00:46:35.209365 | orchestrator | 2025-05-29 00:46:35 | INFO  | Task 
90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:46:35.209527 | orchestrator | 2025-05-29 00:46:35 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:46:35.209657 | orchestrator | 2025-05-29 00:46:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:46:35.209675 | orchestrator | 2025-05-29 00:46:35 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:46:38.267782 | orchestrator | 2025-05-29 00:46:38 | INFO  | Task fff4d2b7-a03f-4a95-9020-90f671afff1d is in state SUCCESS 2025-05-29 00:46:38.270477 | orchestrator | 2025-05-29 00:46:38 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:46:38.271174 | orchestrator | 2025-05-29 00:46:38 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:46:38.272356 | orchestrator | 2025-05-29 00:46:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:46:38.272381 | orchestrator | 2025-05-29 00:46:38 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:46:41.309672 | orchestrator | 2025-05-29 00:46:41 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:46:41.311148 | orchestrator | 2025-05-29 00:46:41 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:46:41.311945 | orchestrator | 2025-05-29 00:46:41 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:46:41.312244 | orchestrator | 2025-05-29 00:46:41 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:46:44.359612 | orchestrator | 2025-05-29 00:46:44 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:46:44.364561 | orchestrator | 2025-05-29 00:46:44 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:46:44.364626 | orchestrator | 2025-05-29 00:46:44 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:46:44.364641 | orchestrator | 2025-05-29 00:46:44 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:46:47.400376 | orchestrator | 2025-05-29 00:46:47 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:46:47.402224 | orchestrator | 2025-05-29 00:46:47 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:46:47.403448 | orchestrator | 2025-05-29 00:46:47 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:46:47.404263 | orchestrator | 2025-05-29 00:46:47 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:46:50.469164 | orchestrator | 2025-05-29 00:46:50 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:46:50.469767 | orchestrator | 2025-05-29 00:46:50 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:46:50.472210 | orchestrator | 2025-05-29 00:46:50 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:46:50.472266 | orchestrator | 2025-05-29 00:46:50 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:46:53.527273 | orchestrator | 2025-05-29 00:46:53 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:46:53.528649 | orchestrator | 2025-05-29 00:46:53 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:46:53.529620 | orchestrator | 2025-05-29 00:46:53 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:46:53.529700 | orchestrator | 2025-05-29 00:46:53 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:46:56.577058 | orchestrator | 2025-05-29 00:46:56 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:46:56.577441 | orchestrator | 2025-05-29 00:46:56 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state 
STARTED 2025-05-29 00:46:56.582385 | orchestrator | 2025-05-29 00:46:56 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:46:56.582475 | orchestrator | 2025-05-29 00:46:56 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:46:59.635709 | orchestrator | 2025-05-29 00:46:59 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:46:59.636274 | orchestrator | 2025-05-29 00:46:59 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:46:59.637566 | orchestrator | 2025-05-29 00:46:59 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:46:59.637823 | orchestrator | 2025-05-29 00:46:59 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:02.681103 | orchestrator | 2025-05-29 00:47:02 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:02.681212 | orchestrator | 2025-05-29 00:47:02 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:47:02.681669 | orchestrator | 2025-05-29 00:47:02 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:02.681694 | orchestrator | 2025-05-29 00:47:02 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:05.726307 | orchestrator | 2025-05-29 00:47:05 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:05.727596 | orchestrator | 2025-05-29 00:47:05 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:47:05.729009 | orchestrator | 2025-05-29 00:47:05 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:05.729042 | orchestrator | 2025-05-29 00:47:05 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:08.775658 | orchestrator | 2025-05-29 00:47:08 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:08.776572 | orchestrator | 
2025-05-29 00:47:08 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:47:08.777384 | orchestrator | 2025-05-29 00:47:08 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:08.777536 | orchestrator | 2025-05-29 00:47:08 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:11.812455 | orchestrator | 2025-05-29 00:47:11 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:11.814064 | orchestrator | 2025-05-29 00:47:11 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:47:11.815280 | orchestrator | 2025-05-29 00:47:11 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:11.815323 | orchestrator | 2025-05-29 00:47:11 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:14.861949 | orchestrator | 2025-05-29 00:47:14 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:14.864058 | orchestrator | 2025-05-29 00:47:14 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:47:14.864091 | orchestrator | 2025-05-29 00:47:14 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:14.864102 | orchestrator | 2025-05-29 00:47:14 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:17.906332 | orchestrator | 2025-05-29 00:47:17 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:17.907112 | orchestrator | 2025-05-29 00:47:17 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:47:17.911267 | orchestrator | 2025-05-29 00:47:17 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:17.911303 | orchestrator | 2025-05-29 00:47:17 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:20.974747 | orchestrator | 2025-05-29 00:47:20 | INFO  | Task 
90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:20.976598 | orchestrator | 2025-05-29 00:47:20 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:47:20.976633 | orchestrator | 2025-05-29 00:47:20 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:20.976646 | orchestrator | 2025-05-29 00:47:20 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:24.029360 | orchestrator | 2025-05-29 00:47:24 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:24.029544 | orchestrator | 2025-05-29 00:47:24 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:47:24.030338 | orchestrator | 2025-05-29 00:47:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:24.030365 | orchestrator | 2025-05-29 00:47:24 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:27.072238 | orchestrator | 2025-05-29 00:47:27 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:27.074334 | orchestrator | 2025-05-29 00:47:27 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:47:27.075088 | orchestrator | 2025-05-29 00:47:27 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:27.075580 | orchestrator | 2025-05-29 00:47:27 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:30.125788 | orchestrator | 2025-05-29 00:47:30 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:30.127920 | orchestrator | 2025-05-29 00:47:30 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:47:30.130919 | orchestrator | 2025-05-29 00:47:30 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:30.130963 | orchestrator | 2025-05-29 00:47:30 | INFO  | Wait 1 second(s) until the next 
check 2025-05-29 00:47:33.183015 | orchestrator | 2025-05-29 00:47:33 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:33.183917 | orchestrator | 2025-05-29 00:47:33 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state STARTED 2025-05-29 00:47:33.183941 | orchestrator | 2025-05-29 00:47:33 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:33.183950 | orchestrator | 2025-05-29 00:47:33 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:36.218271 | orchestrator | 2025-05-29 00:47:36 | INFO  | Task a0183480-febf-49ed-8ef0-0aa19b288009 is in state STARTED 2025-05-29 00:47:36.218420 | orchestrator | 2025-05-29 00:47:36 | INFO  | Task 974a7953-08d3-40de-9c8f-14e6f2259792 is in state STARTED 2025-05-29 00:47:36.218512 | orchestrator | 2025-05-29 00:47:36 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:36.221906 | orchestrator | 2025-05-29 00:47:36 | INFO  | Task 7efd09ef-3c74-4f5c-8063-c69fd30be199 is in state SUCCESS 2025-05-29 00:47:36.224931 | orchestrator | 2025-05-29 00:47:36.225017 | orchestrator | 2025-05-29 00:47:36.225033 | orchestrator | PLAY [Apply role phpmyadmin] *************************************************** 2025-05-29 00:47:36.225046 | orchestrator | 2025-05-29 00:47:36.225057 | orchestrator | TASK [osism.services.phpmyadmin : Create traefik external network] ************* 2025-05-29 00:47:36.225073 | orchestrator | Thursday 29 May 2025 00:45:32 +0000 (0:00:00.148) 0:00:00.148 ********** 2025-05-29 00:47:36.225092 | orchestrator | ok: [testbed-manager] 2025-05-29 00:47:36.225111 | orchestrator | 2025-05-29 00:47:36.225130 | orchestrator | TASK [osism.services.phpmyadmin : Create required directories] ***************** 2025-05-29 00:47:36.225148 | orchestrator | Thursday 29 May 2025 00:45:33 +0000 (0:00:00.606) 0:00:00.755 ********** 2025-05-29 00:47:36.225212 | orchestrator | changed: [testbed-manager] => 
(item=/opt/phpmyadmin) 2025-05-29 00:47:36.225237 | orchestrator | 2025-05-29 00:47:36.225249 | orchestrator | TASK [osism.services.phpmyadmin : Copy docker-compose.yml file] **************** 2025-05-29 00:47:36.225260 | orchestrator | Thursday 29 May 2025 00:45:33 +0000 (0:00:00.480) 0:00:01.235 ********** 2025-05-29 00:47:36.225310 | orchestrator | changed: [testbed-manager] 2025-05-29 00:47:36.225386 | orchestrator | 2025-05-29 00:47:36.225399 | orchestrator | TASK [osism.services.phpmyadmin : Manage phpmyadmin service] ******************* 2025-05-29 00:47:36.225411 | orchestrator | Thursday 29 May 2025 00:45:35 +0000 (0:00:01.206) 0:00:02.441 ********** 2025-05-29 00:47:36.225422 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage phpmyadmin service (10 retries left). 2025-05-29 00:47:36.225433 | orchestrator | ok: [testbed-manager] 2025-05-29 00:47:36.225446 | orchestrator | 2025-05-29 00:47:36.225460 | orchestrator | RUNNING HANDLER [osism.services.phpmyadmin : Restart phpmyadmin service] ******* 2025-05-29 00:47:36.225472 | orchestrator | Thursday 29 May 2025 00:46:33 +0000 (0:00:58.513) 0:01:00.955 ********** 2025-05-29 00:47:36.225485 | orchestrator | changed: [testbed-manager] 2025-05-29 00:47:36.225497 | orchestrator | 2025-05-29 00:47:36.225512 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:47:36.225525 | orchestrator | testbed-manager : ok=5  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:47:36.225539 | orchestrator | 2025-05-29 00:47:36.225551 | orchestrator | Thursday 29 May 2025 00:46:37 +0000 (0:00:03.504) 0:01:04.460 ********** 2025-05-29 00:47:36.225564 | orchestrator | =============================================================================== 2025-05-29 00:47:36.225577 | orchestrator | osism.services.phpmyadmin : Manage phpmyadmin service ------------------ 58.51s 2025-05-29 00:47:36.225590 | orchestrator | osism.services.phpmyadmin : 
Restart phpmyadmin service ------------------ 3.50s 2025-05-29 00:47:36.225603 | orchestrator | osism.services.phpmyadmin : Copy docker-compose.yml file ---------------- 1.21s 2025-05-29 00:47:36.225616 | orchestrator | osism.services.phpmyadmin : Create traefik external network ------------- 0.61s 2025-05-29 00:47:36.225629 | orchestrator | osism.services.phpmyadmin : Create required directories ----------------- 0.48s 2025-05-29 00:47:36.225641 | orchestrator | 2025-05-29 00:47:36.225654 | orchestrator | 2025-05-29 00:47:36.225667 | orchestrator | PLAY [Apply role common] ******************************************************* 2025-05-29 00:47:36.225680 | orchestrator | 2025-05-29 00:47:36.225692 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-29 00:47:36.225705 | orchestrator | Thursday 29 May 2025 00:45:14 +0000 (0:00:00.274) 0:00:00.274 ********** 2025-05-29 00:47:36.225719 | orchestrator | included: /ansible/roles/common/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 00:47:36.225753 | orchestrator | 2025-05-29 00:47:36.225766 | orchestrator | TASK [common : Ensuring config directories exist] ****************************** 2025-05-29 00:47:36.225778 | orchestrator | Thursday 29 May 2025 00:45:15 +0000 (0:00:01.219) 0:00:01.493 ********** 2025-05-29 00:47:36.225799 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-29 00:47:36.225812 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-29 00:47:36.225823 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-29 00:47:36.225882 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-29 00:47:36.225894 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'cron'}, 'cron']) 
2025-05-29 00:47:36.225905 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-29 00:47:36.225916 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-29 00:47:36.225927 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-29 00:47:36.225937 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-29 00:47:36.225948 | orchestrator | changed: [testbed-node-0] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-29 00:47:36.225961 | orchestrator | changed: [testbed-manager] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-29 00:47:36.225972 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'cron'}, 'cron']) 2025-05-29 00:47:36.225983 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-29 00:47:36.225993 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-29 00:47:36.226004 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-29 00:47:36.226080 | orchestrator | changed: [testbed-node-1] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-29 00:47:36.226095 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'fluentd'}, 'fluentd']) 2025-05-29 00:47:36.226125 | orchestrator | changed: [testbed-node-2] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-29 00:47:36.226137 | orchestrator | changed: [testbed-node-4] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-29 00:47:36.226148 | orchestrator | changed: [testbed-node-3] => (item=[{'service_name': 'kolla-toolbox'}, 'kolla-toolbox']) 2025-05-29 00:47:36.226159 | orchestrator | changed: [testbed-node-5] => (item=[{'service_name': 'kolla-toolbox'}, 
'kolla-toolbox']) 2025-05-29 00:47:36.226170 | orchestrator | 2025-05-29 00:47:36.226180 | orchestrator | TASK [common : include_tasks] ************************************************** 2025-05-29 00:47:36.226191 | orchestrator | Thursday 29 May 2025 00:45:19 +0000 (0:00:03.332) 0:00:04.826 ********** 2025-05-29 00:47:36.226202 | orchestrator | included: /ansible/roles/common/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 00:47:36.226215 | orchestrator | 2025-05-29 00:47:36.226231 | orchestrator | TASK [service-cert-copy : common | Copying over extra CA certificates] ********* 2025-05-29 00:47:36.226250 | orchestrator | Thursday 29 May 2025 00:45:20 +0000 (0:00:01.653) 0:00:06.480 ********** 2025-05-29 00:47:36.226276 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-29 00:47:36.226314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-29 00:47:36.226336 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-29 00:47:36.226365 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-29 00:47:36.226387 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-29 00:47:36.226408 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-29 00:47:36.226435 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-29 00:47:36.226448 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226469 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226481 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226497 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226593 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226608 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226626 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226651 | orchestrator | 
changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226686 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226697 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226709 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.226720 | orchestrator | 2025-05-29 00:47:36.226731 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS certificate] *** 2025-05-29 00:47:36.226742 | orchestrator | Thursday 29 May 2025 00:45:25 +0000 (0:00:04.380) 0:00:10.860 ********** 2025-05-29 00:47:36.226768 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-29 00:47:36.226781 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 
'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.226800 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.226812 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:47:36.226824 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-29 00:47:36.226985 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.226997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-29 00:47:36.227038 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227069 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:47:36.227080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-29 00:47:36.227090 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:47:36.227101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227111 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227121 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:47:36.227136 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-29 00:47:36.227146 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227167 | orchestrator | 
skipping: [testbed-node-3] 2025-05-29 00:47:36.227184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-29 00:47:36.227199 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227210 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227220 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:47:36.227230 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.227244 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.227255 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.227265 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:47:36.227275 | orchestrator |
2025-05-29 00:47:36.227285 | orchestrator | TASK [service-cert-copy : common | Copying over backend internal TLS key] ******
2025-05-29 00:47:36.227295 | orchestrator | Thursday 29 May 2025 00:45:26 +0000 (0:00:01.515) 0:00:12.376 **********
2025-05-29 00:47:36.227305 | orchestrator | skipping: [testbed-manager] => (item={'key': 'fluentd', 'value': 
{'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-29 00:47:36.227350 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227362 | orchestrator | skipping: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227372 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:47:36.227397 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': 
['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-29 00:47:36.227408 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227418 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-29 00:47:36.227454 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227487 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:47:36.227497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-29 00:47:36.227507 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227517 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227528 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-29 00:47:36.227542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': 
['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227553 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227563 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:47:36.227580 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-29 00:47:36.227590 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:47:36.227600 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:47:36.227615 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227626 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227636 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:47:36.227646 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})  2025-05-29 00:47:36.227656 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227666 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:47:36.227676 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:47:36.227686 | orchestrator | 2025-05-29 00:47:36.227700 | orchestrator | TASK [common : Copying over /run subdirectories conf] ************************** 2025-05-29 00:47:36.227711 | orchestrator | Thursday 29 May 2025 00:45:29 +0000 (0:00:02.133) 0:00:14.509 ********** 2025-05-29 00:47:36.227720 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:47:36.227730 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:47:36.227740 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:47:36.227756 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:47:36.227766 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:47:36.227776 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:47:36.227785 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:47:36.227795 | orchestrator | 2025-05-29 00:47:36.227804 | orchestrator | TASK [common : Restart systemd-tmpfiles] *************************************** 2025-05-29 00:47:36.227814 | orchestrator | Thursday 29 May 2025 00:45:29 +0000 (0:00:00.923) 0:00:15.432 ********** 2025-05-29 00:47:36.227824 | orchestrator | skipping: [testbed-manager] 2025-05-29 00:47:36.227851 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:47:36.227861 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:47:36.227871 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:47:36.227880 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:47:36.227890 | orchestrator | skipping: [testbed-node-4] 2025-05-29 
00:47:36.227900 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:47:36.227910 | orchestrator | 2025-05-29 00:47:36.227919 | orchestrator | TASK [common : Ensure fluentd image is present for label check] **************** 2025-05-29 00:47:36.227929 | orchestrator | Thursday 29 May 2025 00:45:30 +0000 (0:00:00.860) 0:00:16.293 ********** 2025-05-29 00:47:36.227939 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:47:36.227950 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:47:36.227959 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:47:36.227969 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:47:36.227979 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:47:36.227988 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:47:36.227998 | orchestrator | changed: [testbed-manager] 2025-05-29 00:47:36.228008 | orchestrator | 2025-05-29 00:47:36.228018 | orchestrator | TASK [common : Fetch fluentd Docker image labels] ****************************** 2025-05-29 00:47:36.228028 | orchestrator | Thursday 29 May 2025 00:46:08 +0000 (0:00:37.250) 0:00:53.544 ********** 2025-05-29 00:47:36.228037 | orchestrator | ok: [testbed-manager] 2025-05-29 00:47:36.228053 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:47:36.228063 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:47:36.228072 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:47:36.228082 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:47:36.228092 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:47:36.228101 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:47:36.228110 | orchestrator | 2025-05-29 00:47:36.228120 | orchestrator | TASK [common : Set fluentd facts] ********************************************** 2025-05-29 00:47:36.228129 | orchestrator | Thursday 29 May 2025 00:46:10 +0000 (0:00:02.938) 0:00:56.482 ********** 2025-05-29 00:47:36.228139 | orchestrator | ok: [testbed-manager] 2025-05-29 00:47:36.228149 | orchestrator | ok: [testbed-node-0] 
2025-05-29 00:47:36.228158 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:47:36.228167 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:47:36.228177 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:47:36.228186 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:47:36.228196 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:47:36.228205 | orchestrator |
2025-05-29 00:47:36.228215 | orchestrator | TASK [common : Fetch fluentd Podman image labels] ******************************
2025-05-29 00:47:36.228224 | orchestrator | Thursday 29 May 2025 00:46:12 +0000 (0:00:01.646) 0:00:58.129 **********
2025-05-29 00:47:36.228234 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:47:36.228244 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:47:36.228253 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:47:36.228263 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:47:36.228272 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:47:36.228281 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:47:36.228291 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:47:36.228300 | orchestrator |
2025-05-29 00:47:36.228310 | orchestrator | TASK [common : Set fluentd facts] **********************************************
2025-05-29 00:47:36.228320 | orchestrator | Thursday 29 May 2025 00:46:13 +0000 (0:00:01.187) 0:00:59.317 **********
2025-05-29 00:47:36.228329 | orchestrator | skipping: [testbed-manager]
2025-05-29 00:47:36.228349 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:47:36.228359 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:47:36.228368 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:47:36.228378 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:47:36.228387 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:47:36.228397 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:47:36.228406 | orchestrator |
2025-05-29 00:47:36.228416 | orchestrator | TASK [common : Copying over config.json files for services] ********************
2025-05-29 00:47:36.228425 | orchestrator | Thursday 29 May 2025 00:46:14 +0000 (0:00:00.879) 0:01:00.196 **********
2025-05-29 00:47:36.228435 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.228446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.228456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.228467 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.228482 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228492 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.228509 | orchestrator | changed: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.228524 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.228535 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228560 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228575 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228586 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228603 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228613 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228623 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228638 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228648 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228680 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228691 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.228707 | orchestrator |
2025-05-29 00:47:36.228717 | orchestrator | TASK [common : Find custom fluentd input config files] *************************
2025-05-29 00:47:36.228726 | orchestrator | Thursday 29 May 2025 00:46:19 +0000 (0:00:04.417) 0:01:04.613 **********
2025-05-29 00:47:36.228736 | orchestrator | [WARNING]: Skipped
2025-05-29 00:47:36.228746 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' path due
2025-05-29 00:47:36.228756 | orchestrator | to this access issue:
2025-05-29 00:47:36.228766 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/input' is not a
2025-05-29 00:47:36.228775 | orchestrator | directory
2025-05-29 00:47:36.228785 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-29 00:47:36.228794 | orchestrator |
2025-05-29 00:47:36.228804 | orchestrator | TASK [common : Find custom fluentd filter config files] ************************
2025-05-29 00:47:36.228813 | orchestrator | Thursday 29 May 2025 00:46:20 +0000 (0:00:01.129) 0:01:05.743 **********
2025-05-29 00:47:36.228823 | orchestrator | [WARNING]: Skipped
2025-05-29 00:47:36.228859 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' path due
2025-05-29 00:47:36.228869 | orchestrator | to this access issue:
2025-05-29 00:47:36.228879 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/filter' is not a
2025-05-29 00:47:36.228888 | orchestrator | directory
2025-05-29 00:47:36.228898 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-29 00:47:36.228908 | orchestrator |
2025-05-29 00:47:36.228917 | orchestrator | TASK [common : Find custom fluentd format config files] ************************
2025-05-29 00:47:36.228933 | orchestrator | Thursday 29 May 2025 00:46:20 +0000 (0:00:00.440) 0:01:06.183 **********
2025-05-29 00:47:36.228950 | orchestrator | [WARNING]: Skipped
2025-05-29 00:47:36.228966 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' path due
2025-05-29 00:47:36.228980 | orchestrator | to this access issue:
2025-05-29 00:47:36.228995 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/format' is not a
2025-05-29 00:47:36.229012 | orchestrator | directory
2025-05-29 00:47:36.229029 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-29 00:47:36.229046 | orchestrator |
2025-05-29 00:47:36.229059 | orchestrator | TASK [common : Find custom fluentd output config files] ************************
2025-05-29 00:47:36.229069 | orchestrator | Thursday 29 May 2025 00:46:21 +0000 (0:00:00.574) 0:01:06.758 **********
2025-05-29 00:47:36.229079 | orchestrator | [WARNING]: Skipped
2025-05-29 00:47:36.229088 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' path due
2025-05-29 00:47:36.229098 | orchestrator | to this access issue:
2025-05-29 00:47:36.229108 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/fluentd/output' is not a
2025-05-29 00:47:36.229117 | orchestrator | directory
2025-05-29 00:47:36.229127 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-29 00:47:36.229136 | orchestrator |
2025-05-29 00:47:36.229146 | orchestrator | TASK [common : Copying over td-agent.conf] *************************************
2025-05-29 00:47:36.229156 | orchestrator | Thursday 29 May 2025 00:46:21 +0000 (0:00:00.495) 0:01:07.254 **********
2025-05-29 00:47:36.229165 | orchestrator | changed: [testbed-manager]
2025-05-29 00:47:36.229175 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:47:36.229190 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:47:36.229200 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:47:36.229209 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:47:36.229219 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:47:36.229228 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:47:36.229238 | orchestrator |
2025-05-29 00:47:36.229248 | orchestrator | TASK [common : Copying over cron logrotate config file] ************************
2025-05-29 00:47:36.229257 | orchestrator | Thursday 29 May 2025 00:46:26 +0000 (0:00:04.678) 0:01:11.933 **********
2025-05-29 00:47:36.229267 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-29 00:47:36.229285 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-29 00:47:36.229295 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-29 00:47:36.229304 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-29 00:47:36.229314 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-29 00:47:36.229323 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-29 00:47:36.229333 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/cron-logrotate-global.conf.j2)
2025-05-29 00:47:36.229343 | orchestrator |
2025-05-29 00:47:36.229352 | orchestrator | TASK [common : Ensure RabbitMQ Erlang cookie exists] ***************************
2025-05-29 00:47:36.229362 | orchestrator | Thursday 29 May 2025 00:46:28 +0000 (0:00:02.294) 0:01:14.227 **********
2025-05-29 00:47:36.229372 | orchestrator | changed: [testbed-manager]
2025-05-29 00:47:36.229381 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:47:36.229391 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:47:36.229402 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:47:36.229412 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:47:36.229430 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:47:36.229442 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:47:36.229453 | orchestrator |
2025-05-29 00:47:36.229464 | orchestrator | TASK [common : Ensuring config directories have correct owner and permission] ***
2025-05-29 00:47:36.229475 | orchestrator | Thursday 29 May 2025 00:46:31 +0000 (0:00:02.793) 0:01:17.021 **********
2025-05-29 00:47:36.229486 | orchestrator | ok: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.229498 | orchestrator | skipping: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.229510 | orchestrator | ok: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.229521 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.229545 | orchestrator | ok: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.229569 | orchestrator | ok: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.229587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.229599 | orchestrator | ok: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.229610 | orchestrator | ok: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.229622 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.229633 | orchestrator | ok: [testbed-node-3] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.229655 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.229668 | orchestrator | ok: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.229679 | orchestrator | ok: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.230002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.230069 | orchestrator | ok: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.230084 | orchestrator | ok: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.230096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.230107 | orchestrator | ok: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.230134 | orchestrator | ok: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.230146 | orchestrator | ok: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.230158 | orchestrator |
2025-05-29 00:47:36.230169 | orchestrator | TASK [common : Copy rabbitmq-env.conf to kolla toolbox] ************************
2025-05-29 00:47:36.230180 | orchestrator | Thursday 29 May 2025 00:46:33 +0000 (0:00:02.086) 0:01:19.107 **********
2025-05-29 00:47:36.230191 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-29 00:47:36.230202 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-29 00:47:36.230213 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-29 00:47:36.230224 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-29 00:47:36.230235 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-29 00:47:36.230245 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-29 00:47:36.230256 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/rabbitmq-env.conf.j2)
2025-05-29 00:47:36.230267 | orchestrator |
2025-05-29 00:47:36.230278 | orchestrator | TASK [common : Copy rabbitmq erl_inetrc to kolla toolbox] **********************
2025-05-29 00:47:36.230301 | orchestrator | Thursday 29 May 2025 00:46:36 +0000 (0:00:02.577) 0:01:21.685 **********
2025-05-29 00:47:36.230313 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-29 00:47:36.230324 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-29 00:47:36.230335 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-29 00:47:36.230346 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-29 00:47:36.230356 | orchestrator | changed: [testbed-node-3] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-29 00:47:36.230367 | orchestrator | changed: [testbed-node-4] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-29 00:47:36.230378 | orchestrator | changed: [testbed-node-5] => (item=/ansible/roles/common/templates/erl_inetrc.j2)
2025-05-29 00:47:36.230388 | orchestrator |
2025-05-29 00:47:36.230399 | orchestrator | TASK [common : Check common containers] ****************************************
2025-05-29 00:47:36.230410 | orchestrator | Thursday 29 May 2025 00:46:39 +0000 (0:00:03.059) 0:01:24.745 **********
2025-05-29 00:47:36.230421 | orchestrator | changed: [testbed-manager] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.230445 | orchestrator | changed: [testbed-node-1] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.230456 | orchestrator | changed: [testbed-node-0] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.230472 | orchestrator | changed: [testbed-manager] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:47:36.230484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}})
2025-05-29 00:47:36.230502 | orchestrator | changed: [testbed-node-3] =>
(item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-29 00:47:36.230513 | orchestrator | changed: [testbed-node-0] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.230525 | orchestrator | changed: [testbed-node-1] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.230542 | orchestrator | changed: [testbed-manager] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.230554 | orchestrator | changed: [testbed-node-2] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.230570 | orchestrator | changed: [testbed-node-4] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-29 00:47:36.230582 | orchestrator | changed: [testbed-node-5] => (item={'key': 'fluentd', 'value': {'container_name': 'fluentd', 'group': 'fluentd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/fluentd:5.0.5.20241206', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/etc/kolla/fluentd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'fluentd_data:/var/lib/fluentd/data/', '/var/log/journal:/var/log/journal:ro'], 'dimensions': {}}}) 2025-05-29 00:47:36.230594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.230613 | orchestrator | changed: [testbed-node-3] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.230626 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.230646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': 
['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.230659 | orchestrator | changed: [testbed-node-4] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.230673 | orchestrator | changed: [testbed-node-5] => (item={'key': 'kolla-toolbox', 'value': {'container_name': 'kolla_toolbox', 'group': 'kolla-toolbox', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/kolla-toolbox:18.3.0.20241206', 'environment': {'ANSIBLE_NOCOLOR': '1', 'ANSIBLE_LIBRARY': '/usr/share/ansible', 'REQUESTS_CA_BUNDLE': '/etc/ssl/certs/ca-certificates.crt'}, 'privileged': True, 'volumes': ['/etc/kolla/kolla-toolbox/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run/:/run/:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.230692 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.230703 | 
orchestrator | changed: [testbed-node-4] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.230715 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cron', 'value': {'container_name': 'cron', 'group': 'cron', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cron:3.0.20241206', 'environment': {'KOLLA_LOGROTATE_SCHEDULE': 'daily'}, 'volumes': ['/etc/kolla/cron/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:47:36.230726 | orchestrator | 2025-05-29 00:47:36.230737 | orchestrator | TASK [common : Creating log volume] ******************************************** 2025-05-29 00:47:36.230748 | orchestrator | Thursday 29 May 2025 00:46:42 +0000 (0:00:03.700) 0:01:28.446 ********** 2025-05-29 00:47:36.230759 | orchestrator | changed: [testbed-manager] 2025-05-29 00:47:36.230775 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:47:36.230787 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:47:36.230797 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:47:36.230808 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:47:36.230819 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:47:36.230882 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:47:36.230894 | orchestrator | 2025-05-29 00:47:36.230912 | orchestrator | TASK [common : Link kolla_logs volume to /var/log/kolla] *********************** 2025-05-29 00:47:36.230923 | orchestrator | Thursday 29 May 2025 00:46:44 +0000 (0:00:01.845) 0:01:30.292 ********** 2025-05-29 00:47:36.230934 | 
orchestrator | changed: [testbed-manager] 2025-05-29 00:47:36.230945 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:47:36.230955 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:47:36.230966 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:47:36.230977 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:47:36.230987 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:47:36.230998 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:47:36.231008 | orchestrator | 2025-05-29 00:47:36.231019 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-29 00:47:36.231030 | orchestrator | Thursday 29 May 2025 00:46:46 +0000 (0:00:01.676) 0:01:31.969 ********** 2025-05-29 00:47:36.231041 | orchestrator | 2025-05-29 00:47:36.231051 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-29 00:47:36.231062 | orchestrator | Thursday 29 May 2025 00:46:46 +0000 (0:00:00.063) 0:01:32.032 ********** 2025-05-29 00:47:36.231073 | orchestrator | 2025-05-29 00:47:36.231084 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-29 00:47:36.231095 | orchestrator | Thursday 29 May 2025 00:46:46 +0000 (0:00:00.062) 0:01:32.094 ********** 2025-05-29 00:47:36.231106 | orchestrator | 2025-05-29 00:47:36.231116 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-29 00:47:36.231127 | orchestrator | Thursday 29 May 2025 00:46:46 +0000 (0:00:00.054) 0:01:32.149 ********** 2025-05-29 00:47:36.231138 | orchestrator | 2025-05-29 00:47:36.231148 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-29 00:47:36.231159 | orchestrator | Thursday 29 May 2025 00:46:46 +0000 (0:00:00.249) 0:01:32.399 ********** 2025-05-29 00:47:36.231170 | orchestrator | 2025-05-29 00:47:36.231180 | orchestrator | TASK [common : 
Flush handlers] ************************************************* 2025-05-29 00:47:36.231191 | orchestrator | Thursday 29 May 2025 00:46:46 +0000 (0:00:00.055) 0:01:32.454 ********** 2025-05-29 00:47:36.231202 | orchestrator | 2025-05-29 00:47:36.231213 | orchestrator | TASK [common : Flush handlers] ************************************************* 2025-05-29 00:47:36.231223 | orchestrator | Thursday 29 May 2025 00:46:47 +0000 (0:00:00.052) 0:01:32.507 ********** 2025-05-29 00:47:36.231234 | orchestrator | 2025-05-29 00:47:36.231244 | orchestrator | RUNNING HANDLER [common : Restart fluentd container] *************************** 2025-05-29 00:47:36.231255 | orchestrator | Thursday 29 May 2025 00:46:47 +0000 (0:00:00.069) 0:01:32.576 ********** 2025-05-29 00:47:36.231266 | orchestrator | changed: [testbed-manager] 2025-05-29 00:47:36.231277 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:47:36.231287 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:47:36.231298 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:47:36.231309 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:47:36.231320 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:47:36.231330 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:47:36.231339 | orchestrator | 2025-05-29 00:47:36.231349 | orchestrator | RUNNING HANDLER [common : Restart kolla-toolbox container] ********************* 2025-05-29 00:47:36.231359 | orchestrator | Thursday 29 May 2025 00:46:55 +0000 (0:00:08.799) 0:01:41.376 ********** 2025-05-29 00:47:36.231368 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:47:36.231378 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:47:36.231387 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:47:36.231396 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:47:36.231406 | orchestrator | changed: [testbed-manager] 2025-05-29 00:47:36.231415 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:47:36.231424 | orchestrator | 
changed: [testbed-node-4] 2025-05-29 00:47:36.231434 | orchestrator | 2025-05-29 00:47:36.231448 | orchestrator | RUNNING HANDLER [common : Initializing toolbox container using normal user] **** 2025-05-29 00:47:36.231458 | orchestrator | Thursday 29 May 2025 00:47:20 +0000 (0:00:25.085) 0:02:06.461 ********** 2025-05-29 00:47:36.231473 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:47:36.231483 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:47:36.231493 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:47:36.231502 | orchestrator | ok: [testbed-manager] 2025-05-29 00:47:36.231512 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:47:36.231522 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:47:36.231531 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:47:36.231540 | orchestrator | 2025-05-29 00:47:36.231550 | orchestrator | RUNNING HANDLER [common : Restart cron container] ****************************** 2025-05-29 00:47:36.231560 | orchestrator | Thursday 29 May 2025 00:47:23 +0000 (0:00:02.886) 0:02:09.348 ********** 2025-05-29 00:47:36.231569 | orchestrator | changed: [testbed-manager] 2025-05-29 00:47:36.231579 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:47:36.231588 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:47:36.231598 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:47:36.231607 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:47:36.231617 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:47:36.231626 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:47:36.231636 | orchestrator | 2025-05-29 00:47:36.231645 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:47:36.231655 | orchestrator | testbed-manager : ok=25  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-29 00:47:36.231666 | orchestrator | testbed-node-0 : ok=21  changed=14  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-29 
00:47:36.231676 | orchestrator | testbed-node-1 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-29 00:47:36.231692 | orchestrator | testbed-node-2 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-29 00:47:36.231702 | orchestrator | testbed-node-3 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-29 00:47:36.231712 | orchestrator | testbed-node-4 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-29 00:47:36.231722 | orchestrator | testbed-node-5 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-29 00:47:36.231731 | orchestrator | 2025-05-29 00:47:36.231741 | orchestrator | 2025-05-29 00:47:36.231751 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 00:47:36.231761 | orchestrator | Thursday 29 May 2025 00:47:33 +0000 (0:00:09.208) 0:02:18.557 ********** 2025-05-29 00:47:36.231770 | orchestrator | =============================================================================== 2025-05-29 00:47:36.231780 | orchestrator | common : Ensure fluentd image is present for label check --------------- 37.25s 2025-05-29 00:47:36.231790 | orchestrator | common : Restart kolla-toolbox container ------------------------------- 25.09s 2025-05-29 00:47:36.231799 | orchestrator | common : Restart cron container ----------------------------------------- 9.21s 2025-05-29 00:47:36.231809 | orchestrator | common : Restart fluentd container -------------------------------------- 8.80s 2025-05-29 00:47:36.231819 | orchestrator | common : Copying over td-agent.conf ------------------------------------- 4.68s 2025-05-29 00:47:36.231843 | orchestrator | common : Copying over config.json files for services -------------------- 4.42s 2025-05-29 00:47:36.231853 | orchestrator | service-cert-copy : common | Copying over extra CA certificates --------- 4.38s 2025-05-29 
00:47:36.231863 | orchestrator | common : Check common containers ---------------------------------------- 3.70s 2025-05-29 00:47:36.231873 | orchestrator | common : Ensuring config directories exist ------------------------------ 3.33s 2025-05-29 00:47:36.231882 | orchestrator | common : Copy rabbitmq erl_inetrc to kolla toolbox ---------------------- 3.06s 2025-05-29 00:47:36.231898 | orchestrator | common : Fetch fluentd Docker image labels ------------------------------ 2.94s 2025-05-29 00:47:36.231908 | orchestrator | common : Initializing toolbox container using normal user --------------- 2.89s 2025-05-29 00:47:36.231918 | orchestrator | common : Ensure RabbitMQ Erlang cookie exists --------------------------- 2.79s 2025-05-29 00:47:36.231927 | orchestrator | common : Copy rabbitmq-env.conf to kolla toolbox ------------------------ 2.58s 2025-05-29 00:47:36.231937 | orchestrator | common : Copying over cron logrotate config file ------------------------ 2.29s 2025-05-29 00:47:36.231946 | orchestrator | service-cert-copy : common | Copying over backend internal TLS key ------ 2.13s 2025-05-29 00:47:36.231956 | orchestrator | common : Ensuring config directories have correct owner and permission --- 2.09s 2025-05-29 00:47:36.231965 | orchestrator | common : Creating log volume -------------------------------------------- 1.85s 2025-05-29 00:47:36.231975 | orchestrator | common : Link kolla_logs volume to /var/log/kolla ----------------------- 1.68s 2025-05-29 00:47:36.231985 | orchestrator | common : include_tasks -------------------------------------------------- 1.65s 2025-05-29 00:47:36.232097 | orchestrator | 2025-05-29 00:47:36 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:36.232118 | orchestrator | 2025-05-29 00:47:36 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:47:36.232129 | orchestrator | 2025-05-29 00:47:36 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in 
state STARTED 2025-05-29 00:47:36.232139 | orchestrator | 2025-05-29 00:47:36 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:39.276443 | orchestrator | 2025-05-29 00:47:39 | INFO  | Task a0183480-febf-49ed-8ef0-0aa19b288009 is in state STARTED 2025-05-29 00:47:39.277045 | orchestrator | 2025-05-29 00:47:39 | INFO  | Task 974a7953-08d3-40de-9c8f-14e6f2259792 is in state STARTED 2025-05-29 00:47:39.277918 | orchestrator | 2025-05-29 00:47:39 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:39.280074 | orchestrator | 2025-05-29 00:47:39 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:39.280780 | orchestrator | 2025-05-29 00:47:39 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:47:39.281705 | orchestrator | 2025-05-29 00:47:39 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED 2025-05-29 00:47:39.281728 | orchestrator | 2025-05-29 00:47:39 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:42.331679 | orchestrator | 2025-05-29 00:47:42 | INFO  | Task a0183480-febf-49ed-8ef0-0aa19b288009 is in state STARTED 2025-05-29 00:47:42.332174 | orchestrator | 2025-05-29 00:47:42 | INFO  | Task 974a7953-08d3-40de-9c8f-14e6f2259792 is in state STARTED 2025-05-29 00:47:42.333856 | orchestrator | 2025-05-29 00:47:42 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:42.335384 | orchestrator | 2025-05-29 00:47:42 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:42.336129 | orchestrator | 2025-05-29 00:47:42 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:47:42.338120 | orchestrator | 2025-05-29 00:47:42 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED 2025-05-29 00:47:42.338148 | orchestrator | 2025-05-29 00:47:42 | INFO  | Wait 1 second(s) until the next check 2025-05-29 
00:47:45.374427 | orchestrator | 2025-05-29 00:47:45 | INFO  | Task a0183480-febf-49ed-8ef0-0aa19b288009 is in state STARTED 2025-05-29 00:47:45.375811 | orchestrator | 2025-05-29 00:47:45 | INFO  | Task 974a7953-08d3-40de-9c8f-14e6f2259792 is in state STARTED 2025-05-29 00:47:45.376803 | orchestrator | 2025-05-29 00:47:45 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:45.377851 | orchestrator | 2025-05-29 00:47:45 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:45.379002 | orchestrator | 2025-05-29 00:47:45 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:47:45.379974 | orchestrator | 2025-05-29 00:47:45 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED 2025-05-29 00:47:45.380046 | orchestrator | 2025-05-29 00:47:45 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:48.423321 | orchestrator | 2025-05-29 00:47:48 | INFO  | Task a0183480-febf-49ed-8ef0-0aa19b288009 is in state STARTED 2025-05-29 00:47:48.423697 | orchestrator | 2025-05-29 00:47:48 | INFO  | Task 974a7953-08d3-40de-9c8f-14e6f2259792 is in state STARTED 2025-05-29 00:47:48.424310 | orchestrator | 2025-05-29 00:47:48 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:48.424601 | orchestrator | 2025-05-29 00:47:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:48.425209 | orchestrator | 2025-05-29 00:47:48 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:47:48.425970 | orchestrator | 2025-05-29 00:47:48 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED 2025-05-29 00:47:48.425994 | orchestrator | 2025-05-29 00:47:48 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:51.457979 | orchestrator | 2025-05-29 00:47:51 | INFO  | Task a0183480-febf-49ed-8ef0-0aa19b288009 is in state STARTED 2025-05-29 
00:47:51.458294 | orchestrator | 2025-05-29 00:47:51 | INFO  | Task 974a7953-08d3-40de-9c8f-14e6f2259792 is in state STARTED 2025-05-29 00:47:51.458461 | orchestrator | 2025-05-29 00:47:51 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:51.459090 | orchestrator | 2025-05-29 00:47:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:51.459604 | orchestrator | 2025-05-29 00:47:51 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:47:51.460321 | orchestrator | 2025-05-29 00:47:51 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED 2025-05-29 00:47:51.460469 | orchestrator | 2025-05-29 00:47:51 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:54.492496 | orchestrator | 2025-05-29 00:47:54 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:47:54.493138 | orchestrator | 2025-05-29 00:47:54 | INFO  | Task a0183480-febf-49ed-8ef0-0aa19b288009 is in state STARTED 2025-05-29 00:47:54.494347 | orchestrator | 2025-05-29 00:47:54 | INFO  | Task 974a7953-08d3-40de-9c8f-14e6f2259792 is in state SUCCESS 2025-05-29 00:47:54.499795 | orchestrator | 2025-05-29 00:47:54 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:54.501950 | orchestrator | 2025-05-29 00:47:54 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:54.504043 | orchestrator | 2025-05-29 00:47:54 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:47:54.504654 | orchestrator | 2025-05-29 00:47:54 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED 2025-05-29 00:47:54.504678 | orchestrator | 2025-05-29 00:47:54 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:47:57.537255 | orchestrator | 2025-05-29 00:47:57 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 
00:47:57.537365 | orchestrator | 2025-05-29 00:47:57 | INFO  | Task a0183480-febf-49ed-8ef0-0aa19b288009 is in state STARTED 2025-05-29 00:47:57.537409 | orchestrator | 2025-05-29 00:47:57 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:47:57.537604 | orchestrator | 2025-05-29 00:47:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:47:57.538219 | orchestrator | 2025-05-29 00:47:57 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:47:57.541253 | orchestrator | 2025-05-29 00:47:57 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED 2025-05-29 00:47:57.541357 | orchestrator | 2025-05-29 00:47:57 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:48:00.568935 | orchestrator | 2025-05-29 00:48:00 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:48:00.569892 | orchestrator | 2025-05-29 00:48:00 | INFO  | Task a0183480-febf-49ed-8ef0-0aa19b288009 is in state STARTED 2025-05-29 00:48:00.570401 | orchestrator | 2025-05-29 00:48:00 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:48:00.570845 | orchestrator | 2025-05-29 00:48:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:48:00.572283 | orchestrator | 2025-05-29 00:48:00 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:48:00.572415 | orchestrator | 2025-05-29 00:48:00 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED 2025-05-29 00:48:00.572480 | orchestrator | 2025-05-29 00:48:00 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:48:03.603022 | orchestrator | 2025-05-29 00:48:03 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:48:03.603306 | orchestrator | 2025-05-29 00:48:03 | INFO  | Task a0183480-febf-49ed-8ef0-0aa19b288009 is in state STARTED 2025-05-29 
00:48:03.603905 | orchestrator | 2025-05-29 00:48:03 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:48:03.604645 | orchestrator | 2025-05-29 00:48:03 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:48:03.605332 | orchestrator | 2025-05-29 00:48:03 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:48:03.606277 | orchestrator | 2025-05-29 00:48:03 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED 2025-05-29 00:48:03.606303 | orchestrator | 2025-05-29 00:48:03 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:48:06.632633 | orchestrator | 2025-05-29 00:48:06 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:48:06.633839 | orchestrator | 2025-05-29 00:48:06 | INFO  | Task a0183480-febf-49ed-8ef0-0aa19b288009 is in state STARTED 2025-05-29 00:48:06.634368 | orchestrator | 2025-05-29 00:48:06 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:48:06.635273 | orchestrator | 2025-05-29 00:48:06 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:48:06.635753 | orchestrator | 2025-05-29 00:48:06 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:48:06.636510 | orchestrator | 2025-05-29 00:48:06 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED 2025-05-29 00:48:06.636538 | orchestrator | 2025-05-29 00:48:06 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:48:09.692239 | orchestrator | 2025-05-29 00:48:09 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:48:09.692353 | orchestrator | 2025-05-29 00:48:09 | INFO  | Task a0183480-febf-49ed-8ef0-0aa19b288009 is in state STARTED 2025-05-29 00:48:09.692398 | orchestrator | 2025-05-29 00:48:09 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 
00:48:09.692671 | orchestrator | 2025-05-29 00:48:09 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:48:09.693570 | orchestrator | 2025-05-29 00:48:09 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:48:09.694322 | orchestrator | 2025-05-29 00:48:09 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED
2025-05-29 00:48:09.694446 | orchestrator | 2025-05-29 00:48:09 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:48:12.731042 | orchestrator | 2025-05-29 00:48:12 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED
2025-05-29 00:48:12.734670 | orchestrator |
2025-05-29 00:48:12.734713 | orchestrator |
2025-05-29 00:48:12.734727 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-29 00:48:12.734740 | orchestrator |
2025-05-29 00:48:12.734752 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-29 00:48:12.734763 | orchestrator | Thursday 29 May 2025 00:47:37 +0000 (0:00:00.410) 0:00:00.410 **********
2025-05-29 00:48:12.734806 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:48:12.734820 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:48:12.734831 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:48:12.734841 | orchestrator |
2025-05-29 00:48:12.734852 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-29 00:48:12.734863 | orchestrator | Thursday 29 May 2025 00:47:37 +0000 (0:00:00.691) 0:00:01.101 **********
2025-05-29 00:48:12.734875 | orchestrator | ok: [testbed-node-0] => (item=enable_memcached_True)
2025-05-29 00:48:12.734886 | orchestrator | ok: [testbed-node-1] => (item=enable_memcached_True)
2025-05-29 00:48:12.734897 | orchestrator | ok: [testbed-node-2] => (item=enable_memcached_True)
2025-05-29 00:48:12.734907 | orchestrator |
2025-05-29 00:48:12.734919 | orchestrator | PLAY [Apply role memcached] ****************************************************
2025-05-29 00:48:12.734929 | orchestrator |
2025-05-29 00:48:12.734941 | orchestrator | TASK [memcached : include_tasks] ***********************************************
2025-05-29 00:48:12.734951 | orchestrator | Thursday 29 May 2025 00:47:38 +0000 (0:00:00.436) 0:00:01.538 **********
2025-05-29 00:48:12.734962 | orchestrator | included: /ansible/roles/memcached/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:48:12.734973 | orchestrator |
2025-05-29 00:48:12.734984 | orchestrator | TASK [memcached : Ensuring config directories exist] ***************************
2025-05-29 00:48:12.734995 | orchestrator | Thursday 29 May 2025 00:47:39 +0000 (0:00:01.086) 0:00:02.624 **********
2025-05-29 00:48:12.735006 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-05-29 00:48:12.735017 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-05-29 00:48:12.735028 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-05-29 00:48:12.735039 | orchestrator |
2025-05-29 00:48:12.735049 | orchestrator | TASK [memcached : Copying over config.json files for services] *****************
2025-05-29 00:48:12.735060 | orchestrator | Thursday 29 May 2025 00:47:40 +0000 (0:00:01.032) 0:00:03.657 **********
2025-05-29 00:48:12.735071 | orchestrator | changed: [testbed-node-0] => (item=memcached)
2025-05-29 00:48:12.735083 | orchestrator | changed: [testbed-node-2] => (item=memcached)
2025-05-29 00:48:12.735094 | orchestrator | changed: [testbed-node-1] => (item=memcached)
2025-05-29 00:48:12.735104 | orchestrator |
2025-05-29 00:48:12.735115 | orchestrator | TASK [memcached : Check memcached container] ***********************************
2025-05-29 00:48:12.735126 | orchestrator | Thursday 29 May 2025 00:47:43 +0000 (0:00:02.649) 0:00:06.307 **********
2025-05-29 00:48:12.735137 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:48:12.735147 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:48:12.735158 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:48:12.735169 | orchestrator |
2025-05-29 00:48:12.735180 | orchestrator | RUNNING HANDLER [memcached : Restart memcached container] **********************
2025-05-29 00:48:12.735215 | orchestrator | Thursday 29 May 2025 00:47:45 +0000 (0:00:02.539) 0:00:08.847 **********
2025-05-29 00:48:12.735235 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:48:12.735251 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:48:12.735262 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:48:12.735275 | orchestrator |
2025-05-29 00:48:12.735287 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:48:12.735301 | orchestrator | testbed-node-0 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:48:12.735315 | orchestrator | testbed-node-1 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:48:12.735327 | orchestrator | testbed-node-2 : ok=7  changed=4  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:48:12.735340 | orchestrator |
2025-05-29 00:48:12.735352 | orchestrator |
2025-05-29 00:48:12.735379 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 00:48:12.735392 | orchestrator | Thursday 29 May 2025 00:47:52 +0000 (0:00:06.893) 0:00:15.740 **********
2025-05-29 00:48:12.735404 | orchestrator | ===============================================================================
2025-05-29 00:48:12.735417 | orchestrator | memcached : Restart memcached container --------------------------------- 6.89s
2025-05-29 00:48:12.735430 | orchestrator | memcached : Copying over config.json files for services ----------------- 2.65s
2025-05-29 00:48:12.735442 | orchestrator | memcached : Check memcached container ----------------------------------- 2.54s
2025-05-29 00:48:12.735455 | orchestrator | memcached : include_tasks ----------------------------------------------- 1.09s
2025-05-29 00:48:12.735468 | orchestrator | memcached : Ensuring config directories exist --------------------------- 1.03s
2025-05-29 00:48:12.735480 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.69s
2025-05-29 00:48:12.735493 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.44s
2025-05-29 00:48:12.735505 | orchestrator |
2025-05-29 00:48:12.735518 | orchestrator |
2025-05-29 00:48:12.735531 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-29 00:48:12.735542 | orchestrator |
2025-05-29 00:48:12.735552 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-29 00:48:12.735563 | orchestrator | Thursday 29 May 2025 00:47:37 +0000 (0:00:00.668) 0:00:00.668 **********
2025-05-29 00:48:12.735574 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:48:12.735585 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:48:12.735596 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:48:12.735607 | orchestrator |
2025-05-29 00:48:12.735618 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-29 00:48:12.735641 | orchestrator | Thursday 29 May 2025 00:47:38 +0000 (0:00:00.886) 0:00:01.554 **********
2025-05-29 00:48:12.735653 | orchestrator | ok: [testbed-node-0] => (item=enable_redis_True)
2025-05-29 00:48:12.735664 | orchestrator | ok: [testbed-node-1] => (item=enable_redis_True)
2025-05-29 00:48:12.735675 | orchestrator | ok: [testbed-node-2] => (item=enable_redis_True)
2025-05-29 00:48:12.735686 | orchestrator |
2025-05-29 00:48:12.735697 | orchestrator | PLAY [Apply role redis] ********************************************************
2025-05-29 00:48:12.735708 |
orchestrator | 2025-05-29 00:48:12.735718 | orchestrator | TASK [redis : include_tasks] *************************************************** 2025-05-29 00:48:12.735729 | orchestrator | Thursday 29 May 2025 00:47:39 +0000 (0:00:00.847) 0:00:02.402 ********** 2025-05-29 00:48:12.735740 | orchestrator | included: /ansible/roles/redis/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:48:12.735751 | orchestrator | 2025-05-29 00:48:12.735762 | orchestrator | TASK [redis : Ensuring config directories exist] ******************************* 2025-05-29 00:48:12.735773 | orchestrator | Thursday 29 May 2025 00:47:40 +0000 (0:00:01.569) 0:00:03.971 ********** 2025-05-29 00:48:12.735813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.735831 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.735843 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': 
{'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.735862 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.735874 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.735896 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.735908 | orchestrator | 2025-05-29 00:48:12.735919 | orchestrator | TASK [redis : Copying over default config.json files] ************************** 2025-05-29 00:48:12.735938 | orchestrator | Thursday 29 May 2025 00:47:42 +0000 (0:00:02.056) 0:00:06.028 ********** 2025-05-29 00:48:12.735950 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.735962 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.735974 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.735990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': 
['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736049 | orchestrator | 2025-05-29 00:48:12.736060 | orchestrator | TASK [redis : Copying over redis config files] ********************************* 2025-05-29 00:48:12.736071 | orchestrator | Thursday 29 May 2025 00:47:45 +0000 (0:00:02.894) 0:00:08.923 ********** 2025-05-29 00:48:12.736083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736095 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736123 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736135 | orchestrator | changed: [testbed-node-1] => (item={'key': 
'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736172 | orchestrator | 2025-05-29 00:48:12.736183 | orchestrator | TASK [redis : Check redis containers] ****************************************** 2025-05-29 00:48:12.736194 | orchestrator | Thursday 29 May 2025 00:47:48 +0000 (0:00:03.413) 0:00:12.336 ********** 2025-05-29 00:48:12.736205 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736217 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736229 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis', 'value': {'container_name': 'redis', 'group': 'redis', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis:6.0.16.20241206', 'volumes': ['/etc/kolla/redis/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'redis:/var/lib/redis/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-server 6379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736240 | orchestrator | changed: [testbed-node-1] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736252 | orchestrator | changed: [testbed-node-2] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736276 | orchestrator | changed: [testbed-node-0] => (item={'key': 'redis-sentinel', 'value': {'container_name': 'redis_sentinel', 'group': 'redis', 'environment': {'REDIS_CONF': '/etc/redis/redis.conf', 'REDIS_GEN_CONF': '/etc/redis/redis-regenerated-by-config-rewrite.conf'}, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/redis-sentinel:6.0.16.20241206', 'volumes': ['/etc/kolla/redis-sentinel/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen redis-sentinel 26379'], 'timeout': '30'}}}) 2025-05-29 00:48:12.736295 | orchestrator | 2025-05-29 00:48:12.736306 | orchestrator | TASK [redis : Flush handlers] ************************************************** 2025-05-29 00:48:12.736355 | orchestrator | Thursday 29 May 2025 00:47:51 +0000 (0:00:02.434) 0:00:14.770 
**********
2025-05-29 00:48:12.736369 | orchestrator |
2025-05-29 00:48:12.736381 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-29 00:48:12.736392 | orchestrator | Thursday 29 May 2025 00:47:51 +0000 (0:00:00.070) 0:00:14.841 **********
2025-05-29 00:48:12.736403 | orchestrator |
2025-05-29 00:48:12.736414 | orchestrator | TASK [redis : Flush handlers] **************************************************
2025-05-29 00:48:12.736425 | orchestrator | Thursday 29 May 2025 00:47:51 +0000 (0:00:00.053) 0:00:14.895 **********
2025-05-29 00:48:12.736436 | orchestrator |
2025-05-29 00:48:12.736447 | orchestrator | RUNNING HANDLER [redis : Restart redis container] ******************************
2025-05-29 00:48:12.736457 | orchestrator | Thursday 29 May 2025 00:47:51 +0000 (0:00:00.088) 0:00:14.983 **********
2025-05-29 00:48:12.736468 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:48:12.736480 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:48:12.736490 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:48:12.736502 | orchestrator |
2025-05-29 00:48:12.736513 | orchestrator | RUNNING HANDLER [redis : Restart redis-sentinel container] *********************
2025-05-29 00:48:12.736523 | orchestrator | Thursday 29 May 2025 00:48:01 +0000 (0:00:10.135) 0:00:25.118 **********
2025-05-29 00:48:12.736534 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:48:12.736546 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:48:12.736556 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:48:12.736567 | orchestrator |
2025-05-29 00:48:12.736578 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:48:12.736590 | orchestrator | testbed-node-0 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:48:12.736601 | orchestrator | testbed-node-1 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:48:12.736613 | orchestrator | testbed-node-2 : ok=9  changed=6  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 00:48:12.736624 | orchestrator |
2025-05-29 00:48:12.736635 | orchestrator |
2025-05-29 00:48:12.736645 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 00:48:12.736656 | orchestrator | Thursday 29 May 2025 00:48:10 +0000 (0:00:08.758) 0:00:33.876 **********
2025-05-29 00:48:12.736667 | orchestrator | ===============================================================================
2025-05-29 00:48:12.736678 | orchestrator | redis : Restart redis container ---------------------------------------- 10.13s
2025-05-29 00:48:12.736689 | orchestrator | redis : Restart redis-sentinel container -------------------------------- 8.76s
2025-05-29 00:48:12.736700 | orchestrator | redis : Copying over redis config files --------------------------------- 3.41s
2025-05-29 00:48:12.736710 | orchestrator | redis : Copying over default config.json files -------------------------- 2.90s
2025-05-29 00:48:12.736721 | orchestrator | redis : Check redis containers ------------------------------------------ 2.43s
2025-05-29 00:48:12.736732 | orchestrator | redis : Ensuring config directories exist ------------------------------- 2.06s
2025-05-29 00:48:12.736743 | orchestrator | redis : include_tasks --------------------------------------------------- 1.57s
2025-05-29 00:48:12.736754 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.89s
2025-05-29 00:48:12.736765 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.85s
2025-05-29 00:48:12.736809 | orchestrator | redis : Flush handlers -------------------------------------------------- 0.21s
2025-05-29 00:48:12.736933 | orchestrator | 2025-05-29 00:48:12 | INFO  | Task a0183480-febf-49ed-8ef0-0aa19b288009 is in state SUCCESS
2025-05-29 00:48:12.736949 |
orchestrator | 2025-05-29 00:48:12 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:48:12.736960 | orchestrator | 2025-05-29 00:48:12 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:48:12.736971 | orchestrator | 2025-05-29 00:48:12 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:48:12.736982 | orchestrator | 2025-05-29 00:48:12 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED
2025-05-29 00:48:12.736993 | orchestrator | 2025-05-29 00:48:12 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:48:15.782954 | orchestrator | 2025-05-29 00:48:15 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED
2025-05-29 00:48:15.783067 | orchestrator | 2025-05-29 00:48:15 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:48:15.783085 | orchestrator | 2025-05-29 00:48:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:48:15.783172 | orchestrator | 2025-05-29 00:48:15 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:48:15.783991 | orchestrator | 2025-05-29 00:48:15 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED
2025-05-29 00:48:15.784016 | orchestrator | 2025-05-29 00:48:15 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:48:18.820606 | orchestrator | 2025-05-29 00:48:18 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED
2025-05-29 00:48:18.820884 | orchestrator | 2025-05-29 00:48:18 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:48:18.822197 | orchestrator | 2025-05-29 00:48:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:48:18.823046 | orchestrator | 2025-05-29 00:48:18 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:48:18.823731 | orchestrator | 2025-05-29 00:48:18 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED
2025-05-29 00:48:18.823756 | orchestrator | 2025-05-29 00:48:18 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:48:21.858443 | orchestrator | 2025-05-29 00:48:21 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED
2025-05-29 00:48:21.858560 | orchestrator | 2025-05-29 00:48:21 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:48:21.858641 | orchestrator | 2025-05-29 00:48:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:48:21.858731 | orchestrator | 2025-05-29 00:48:21 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:48:21.860997 | orchestrator | 2025-05-29 00:48:21 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED
2025-05-29 00:48:21.861081 | orchestrator | 2025-05-29 00:48:21 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:48:24.904689 | orchestrator | 2025-05-29 00:48:24 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED
2025-05-29 00:48:24.905406 | orchestrator | 2025-05-29 00:48:24 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:48:24.906566 | orchestrator | 2025-05-29 00:48:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:48:24.910821 | orchestrator | 2025-05-29 00:48:24 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:48:24.911579 | orchestrator | 2025-05-29 00:48:24 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED
2025-05-29 00:48:24.911625 | orchestrator | 2025-05-29 00:48:24 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:48:27.974709 | orchestrator | 2025-05-29 00:48:27 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED
2025-05-29 00:48:27.980468 | orchestrator | 2025-05-29 00:48:27 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:48:27.981150 | orchestrator | 2025-05-29 00:48:27 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:48:27.982953 | orchestrator | 2025-05-29 00:48:27 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:48:27.983655 | orchestrator | 2025-05-29 00:48:27 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED
2025-05-29 00:48:27.983679 | orchestrator | 2025-05-29 00:48:27 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:48:31.029084 | orchestrator | 2025-05-29 00:48:31 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED
2025-05-29 00:48:31.029294 | orchestrator | 2025-05-29 00:48:31 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:48:31.029359 | orchestrator | 2025-05-29 00:48:31 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:48:31.031046 | orchestrator | 2025-05-29 00:48:31 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:48:31.031589 | orchestrator | 2025-05-29 00:48:31 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED
2025-05-29 00:48:31.031613 | orchestrator | 2025-05-29 00:48:31 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:48:34.078090 | orchestrator | 2025-05-29 00:48:34 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED
2025-05-29 00:48:34.079561 | orchestrator | 2025-05-29 00:48:34 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:48:34.079594 | orchestrator | 2025-05-29 00:48:34 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:48:34.079605 | orchestrator | 2025-05-29 00:48:34 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:48:34.082941 | orchestrator | 2025-05-29 00:48:34 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED
2025-05-29 00:48:34.082969 | orchestrator | 2025-05-29 00:48:34 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:48:37.125850 | orchestrator | 2025-05-29 00:48:37 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED
2025-05-29 00:48:37.130731 | orchestrator | 2025-05-29 00:48:37 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:48:37.131919 | orchestrator | 2025-05-29 00:48:37 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:48:37.134595 | orchestrator | 2025-05-29 00:48:37 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:48:37.134631 | orchestrator | 2025-05-29 00:48:37 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED
2025-05-29 00:48:37.134695 | orchestrator | 2025-05-29 00:48:37 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:48:40.172983 | orchestrator | 2025-05-29 00:48:40 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED
2025-05-29 00:48:40.173540 | orchestrator | 2025-05-29 00:48:40 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:48:40.174433 | orchestrator | 2025-05-29 00:48:40 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:48:40.175422 | orchestrator | 2025-05-29 00:48:40 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:48:40.176841 | orchestrator | 2025-05-29 00:48:40 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED
2025-05-29 00:48:40.179445 | orchestrator | 2025-05-29 00:48:40 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:48:43.232262 | orchestrator | 2025-05-29 00:48:43 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED
2025-05-29 00:48:43.232725 | orchestrator | 2025-05-29 00:48:43 | INFO  | Task
90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:48:43.233679 | orchestrator | 2025-05-29 00:48:43 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:48:43.235333 | orchestrator | 2025-05-29 00:48:43 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:48:43.236580 | orchestrator | 2025-05-29 00:48:43 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED 2025-05-29 00:48:43.236670 | orchestrator | 2025-05-29 00:48:43 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:48:46.287318 | orchestrator | 2025-05-29 00:48:46 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:48:46.288152 | orchestrator | 2025-05-29 00:48:46 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:48:46.291057 | orchestrator | 2025-05-29 00:48:46 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:48:46.292589 | orchestrator | 2025-05-29 00:48:46 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:48:46.296525 | orchestrator | 2025-05-29 00:48:46 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED 2025-05-29 00:48:46.296600 | orchestrator | 2025-05-29 00:48:46 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:48:49.348310 | orchestrator | 2025-05-29 00:48:49 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:48:49.350393 | orchestrator | 2025-05-29 00:48:49 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:48:49.351098 | orchestrator | 2025-05-29 00:48:49 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:48:49.351836 | orchestrator | 2025-05-29 00:48:49 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:48:49.355639 | orchestrator | 2025-05-29 00:48:49 | INFO  | Task 
180a196e-e849-4869-9cf5-61a64c6bfb7b is in state STARTED 2025-05-29 00:48:49.355690 | orchestrator | 2025-05-29 00:48:49 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:48:52.398911 | orchestrator | 2025-05-29 00:48:52 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:48:52.399698 | orchestrator | 2025-05-29 00:48:52 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:48:52.403105 | orchestrator | 2025-05-29 00:48:52 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:48:52.403946 | orchestrator | 2025-05-29 00:48:52 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:48:52.408360 | orchestrator | 2025-05-29 00:48:52.408419 | orchestrator | 2025-05-29 00:48:52 | INFO  | Task 180a196e-e849-4869-9cf5-61a64c6bfb7b is in state SUCCESS 2025-05-29 00:48:52.410611 | orchestrator | 2025-05-29 00:48:52.410656 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-29 00:48:52.410692 | orchestrator | 2025-05-29 00:48:52.410704 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-29 00:48:52.410716 | orchestrator | Thursday 29 May 2025 00:47:37 +0000 (0:00:00.435) 0:00:00.435 ********** 2025-05-29 00:48:52.410959 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:48:52.410979 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:48:52.410990 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:48:52.411001 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:48:52.411012 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:48:52.411023 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:48:52.411034 | orchestrator | 2025-05-29 00:48:52.411045 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-29 00:48:52.411056 | orchestrator | Thursday 29 May 2025 00:47:38 +0000 
(0:00:00.916) 0:00:01.352 ********** 2025-05-29 00:48:52.411067 | orchestrator | ok: [testbed-node-0] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-29 00:48:52.411078 | orchestrator | ok: [testbed-node-1] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-29 00:48:52.411089 | orchestrator | ok: [testbed-node-2] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-29 00:48:52.411099 | orchestrator | ok: [testbed-node-3] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-29 00:48:52.411110 | orchestrator | ok: [testbed-node-4] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-29 00:48:52.411121 | orchestrator | ok: [testbed-node-5] => (item=enable_openvswitch_True_enable_ovs_dpdk_False) 2025-05-29 00:48:52.411132 | orchestrator | 2025-05-29 00:48:52.411142 | orchestrator | PLAY [Apply role openvswitch] ************************************************** 2025-05-29 00:48:52.411153 | orchestrator | 2025-05-29 00:48:52.411166 | orchestrator | TASK [openvswitch : include_tasks] ********************************************* 2025-05-29 00:48:52.411179 | orchestrator | Thursday 29 May 2025 00:47:39 +0000 (0:00:01.551) 0:00:02.904 ********** 2025-05-29 00:48:52.411192 | orchestrator | included: /ansible/roles/openvswitch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 00:48:52.411207 | orchestrator | 2025-05-29 00:48:52.411220 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-29 00:48:52.411232 | orchestrator | Thursday 29 May 2025 00:47:42 +0000 (0:00:02.224) 0:00:05.128 ********** 2025-05-29 00:48:52.411245 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-29 00:48:52.411258 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-29 00:48:52.411270 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-29 
00:48:52.411282 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-29 00:48:52.411295 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-29 00:48:52.411307 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-29 00:48:52.411320 | orchestrator | 2025-05-29 00:48:52.411332 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-29 00:48:52.411344 | orchestrator | Thursday 29 May 2025 00:47:43 +0000 (0:00:01.441) 0:00:06.570 ********** 2025-05-29 00:48:52.411357 | orchestrator | changed: [testbed-node-2] => (item=openvswitch) 2025-05-29 00:48:52.411370 | orchestrator | changed: [testbed-node-0] => (item=openvswitch) 2025-05-29 00:48:52.411382 | orchestrator | changed: [testbed-node-4] => (item=openvswitch) 2025-05-29 00:48:52.411395 | orchestrator | changed: [testbed-node-3] => (item=openvswitch) 2025-05-29 00:48:52.411407 | orchestrator | changed: [testbed-node-1] => (item=openvswitch) 2025-05-29 00:48:52.411420 | orchestrator | changed: [testbed-node-5] => (item=openvswitch) 2025-05-29 00:48:52.411432 | orchestrator | 2025-05-29 00:48:52.411444 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-29 00:48:52.411457 | orchestrator | Thursday 29 May 2025 00:47:46 +0000 (0:00:02.408) 0:00:08.978 ********** 2025-05-29 00:48:52.411479 | orchestrator | skipping: [testbed-node-0] => (item=openvswitch)  2025-05-29 00:48:52.411492 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:48:52.411521 | orchestrator | skipping: [testbed-node-1] => (item=openvswitch)  2025-05-29 00:48:52.411541 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:48:52.411558 | orchestrator | skipping: [testbed-node-2] => (item=openvswitch)  2025-05-29 00:48:52.411578 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:48:52.411597 | orchestrator | skipping: [testbed-node-3] => (item=openvswitch)  2025-05-29 
00:48:52.411617 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:48:52.411635 | orchestrator | skipping: [testbed-node-4] => (item=openvswitch)  2025-05-29 00:48:52.411647 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:48:52.411657 | orchestrator | skipping: [testbed-node-5] => (item=openvswitch)  2025-05-29 00:48:52.411668 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:48:52.411679 | orchestrator | 2025-05-29 00:48:52.411690 | orchestrator | TASK [openvswitch : Create /run/openvswitch directory on host] ***************** 2025-05-29 00:48:52.411701 | orchestrator | Thursday 29 May 2025 00:47:47 +0000 (0:00:01.629) 0:00:10.608 ********** 2025-05-29 00:48:52.411712 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:48:52.411750 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:48:52.411762 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:48:52.411773 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:48:52.411784 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:48:52.411795 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:48:52.411806 | orchestrator | 2025-05-29 00:48:52.411816 | orchestrator | TASK [openvswitch : Ensuring config directories exist] ************************* 2025-05-29 00:48:52.411827 | orchestrator | Thursday 29 May 2025 00:47:48 +0000 (0:00:00.610) 0:00:11.218 ********** 2025-05-29 00:48:52.411861 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.411877 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.411890 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.411902 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': 
['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.411928 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.411946 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.411959 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.411971 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.411982 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412004 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412016 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412034 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 
'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412046 | orchestrator | 2025-05-29 00:48:52.412057 | orchestrator | TASK [openvswitch : Copying over config.json files for services] *************** 2025-05-29 00:48:52.412068 | orchestrator | Thursday 29 May 2025 00:47:50 +0000 (0:00:01.912) 0:00:13.131 ********** 2025-05-29 00:48:52.412080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412126 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412138 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412156 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412168 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412203 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412215 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412232 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412244 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412255 | orchestrator | 2025-05-29 00:48:52.412266 | orchestrator | TASK [openvswitch : Copying over start-ovs file for openvswitch-vswitchd] ****** 2025-05-29 
00:48:52.412277 | orchestrator | Thursday 29 May 2025 00:47:53 +0000 (0:00:02.940) 0:00:16.071 ********** 2025-05-29 00:48:52.412288 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:48:52.412299 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:48:52.412310 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:48:52.412321 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:48:52.412332 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:48:52.412349 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:48:52.412360 | orchestrator | 2025-05-29 00:48:52.412371 | orchestrator | TASK [openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server] *** 2025-05-29 00:48:52.412382 | orchestrator | Thursday 29 May 2025 00:47:56 +0000 (0:00:02.963) 0:00:19.035 ********** 2025-05-29 00:48:52.412392 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:48:52.412403 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:48:52.412414 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:48:52.412424 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:48:52.412435 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:48:52.412446 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:48:52.412456 | orchestrator | 2025-05-29 00:48:52.412467 | orchestrator | TASK [openvswitch : Copying over ovs-vsctl wrapper] **************************** 2025-05-29 00:48:52.412478 | orchestrator | Thursday 29 May 2025 00:47:59 +0000 (0:00:02.971) 0:00:22.007 ********** 2025-05-29 00:48:52.412489 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:48:52.412499 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:48:52.412515 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:48:52.412537 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:48:52.412566 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:48:52.412582 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:48:52.412599 | orchestrator | 
2025-05-29 00:48:52.412617 | orchestrator | TASK [openvswitch : Check openvswitch containers] ****************************** 2025-05-29 00:48:52.412635 | orchestrator | Thursday 29 May 2025 00:48:00 +0000 (0:00:01.238) 0:00:23.245 ********** 2025-05-29 00:48:52.412665 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412686 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 
'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412793 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412820 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412831 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-db-server', 'value': {'container_name': 'openvswitch_db', 'image': 'registry.osism.tech/kolla/release/openvswitch-db-server:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'volumes': ['/etc/kolla/openvswitch-db-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 'openvswitch_db:/var/lib/openvswitch/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovsdb-client list-dbs'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412843 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412855 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': 
['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412876 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412901 | orchestrator | changed: [testbed-node-3] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412920 | orchestrator | changed: [testbed-node-4] => (item={'key': 'openvswitch-vswitchd', 
'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412932 | orchestrator | changed: [testbed-node-5] => (item={'key': 'openvswitch-vswitchd', 'value': {'container_name': 'openvswitch_vswitchd', 'image': 'registry.osism.tech/kolla/release/openvswitch-vswitchd:3.3.0.20241206', 'enabled': True, 'group': 'openvswitch', 'host_in_groups': True, 'privileged': True, 'volumes': ['/etc/kolla/openvswitch-vswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'ovs-appctl version'], 'timeout': '30'}}}) 2025-05-29 00:48:52.412944 | orchestrator | 2025-05-29 00:48:52.412955 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-29 00:48:52.412966 | orchestrator | Thursday 29 May 2025 00:48:02 +0000 (0:00:02.530) 0:00:25.775 ********** 2025-05-29 00:48:52.412977 | orchestrator | 2025-05-29 00:48:52.412993 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-29 00:48:52.413004 | orchestrator | Thursday 29 May 2025 00:48:03 +0000 (0:00:00.188) 0:00:25.964 ********** 2025-05-29 00:48:52.413014 | orchestrator | 2025-05-29 
00:48:52.413024 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-29 00:48:52.413033 | orchestrator | Thursday 29 May 2025 00:48:03 +0000 (0:00:00.294) 0:00:26.258 ********** 2025-05-29 00:48:52.413043 | orchestrator | 2025-05-29 00:48:52.413053 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-29 00:48:52.413062 | orchestrator | Thursday 29 May 2025 00:48:03 +0000 (0:00:00.108) 0:00:26.367 ********** 2025-05-29 00:48:52.413072 | orchestrator | 2025-05-29 00:48:52.413081 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-29 00:48:52.413091 | orchestrator | Thursday 29 May 2025 00:48:03 +0000 (0:00:00.231) 0:00:26.599 ********** 2025-05-29 00:48:52.413101 | orchestrator | 2025-05-29 00:48:52.413110 | orchestrator | TASK [openvswitch : Flush Handlers] ******************************************** 2025-05-29 00:48:52.413120 | orchestrator | Thursday 29 May 2025 00:48:03 +0000 (0:00:00.106) 0:00:26.705 ********** 2025-05-29 00:48:52.413129 | orchestrator | 2025-05-29 00:48:52.413139 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-db-server container] ******** 2025-05-29 00:48:52.413149 | orchestrator | Thursday 29 May 2025 00:48:04 +0000 (0:00:00.276) 0:00:26.982 ********** 2025-05-29 00:48:52.413165 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:48:52.413175 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:48:52.413185 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:48:52.413194 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:48:52.413204 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:48:52.413214 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:48:52.413223 | orchestrator | 2025-05-29 00:48:52.413233 | orchestrator | RUNNING HANDLER [openvswitch : Waiting for openvswitch_db service to be ready] *** 2025-05-29 00:48:52.413243 | 
orchestrator | Thursday 29 May 2025 00:48:14 +0000 (0:00:10.464) 0:00:37.446 ********** 2025-05-29 00:48:52.413258 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:48:52.413269 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:48:52.413278 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:48:52.413288 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:48:52.413298 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:48:52.413307 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:48:52.413317 | orchestrator | 2025-05-29 00:48:52.413327 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-29 00:48:52.413336 | orchestrator | Thursday 29 May 2025 00:48:16 +0000 (0:00:02.201) 0:00:39.648 ********** 2025-05-29 00:48:52.413346 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:48:52.413356 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:48:52.413365 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:48:52.413375 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:48:52.413384 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:48:52.413394 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:48:52.413403 | orchestrator | 2025-05-29 00:48:52.413413 | orchestrator | TASK [openvswitch : Set system-id, hostname and hw-offload] ******************** 2025-05-29 00:48:52.413423 | orchestrator | Thursday 29 May 2025 00:48:27 +0000 (0:00:10.385) 0:00:50.033 ********** 2025-05-29 00:48:52.413433 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-1'}) 2025-05-29 00:48:52.413443 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-2'}) 2025-05-29 00:48:52.413453 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-0'}) 2025-05-29 00:48:52.413462 | orchestrator | changed: [testbed-node-4] => 
(item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-4'}) 2025-05-29 00:48:52.413472 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-3'}) 2025-05-29 00:48:52.413482 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'system-id', 'value': 'testbed-node-5'}) 2025-05-29 00:48:52.413492 | orchestrator | changed: [testbed-node-1] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-1'}) 2025-05-29 00:48:52.413501 | orchestrator | changed: [testbed-node-2] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-2'}) 2025-05-29 00:48:52.413511 | orchestrator | changed: [testbed-node-0] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-0'}) 2025-05-29 00:48:52.413520 | orchestrator | changed: [testbed-node-4] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-4'}) 2025-05-29 00:48:52.413530 | orchestrator | changed: [testbed-node-3] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-3'}) 2025-05-29 00:48:52.413540 | orchestrator | changed: [testbed-node-5] => (item={'col': 'external_ids', 'name': 'hostname', 'value': 'testbed-node-5'}) 2025-05-29 00:48:52.413550 | orchestrator | ok: [testbed-node-1] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-29 00:48:52.413559 | orchestrator | ok: [testbed-node-2] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-29 00:48:52.413569 | orchestrator | ok: [testbed-node-0] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-29 00:48:52.413584 | orchestrator | ok: [testbed-node-4] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-29 00:48:52.413598 | orchestrator | ok: [testbed-node-3] => (item={'col': 
'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-29 00:48:52.413608 | orchestrator | ok: [testbed-node-5] => (item={'col': 'other_config', 'name': 'hw-offload', 'value': True, 'state': 'absent'}) 2025-05-29 00:48:52.413618 | orchestrator | 2025-05-29 00:48:52.413628 | orchestrator | TASK [openvswitch : Ensuring OVS bridge is properly setup] ********************* 2025-05-29 00:48:52.413638 | orchestrator | Thursday 29 May 2025 00:48:34 +0000 (0:00:07.817) 0:00:57.851 ********** 2025-05-29 00:48:52.413648 | orchestrator | skipping: [testbed-node-3] => (item=br-ex)  2025-05-29 00:48:52.413657 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:48:52.413667 | orchestrator | skipping: [testbed-node-4] => (item=br-ex)  2025-05-29 00:48:52.413677 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:48:52.413686 | orchestrator | skipping: [testbed-node-5] => (item=br-ex)  2025-05-29 00:48:52.413696 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:48:52.413705 | orchestrator | changed: [testbed-node-0] => (item=br-ex) 2025-05-29 00:48:52.413715 | orchestrator | changed: [testbed-node-1] => (item=br-ex) 2025-05-29 00:48:52.413780 | orchestrator | changed: [testbed-node-2] => (item=br-ex) 2025-05-29 00:48:52.413790 | orchestrator | 2025-05-29 00:48:52.413800 | orchestrator | TASK [openvswitch : Ensuring OVS ports are properly setup] ********************* 2025-05-29 00:48:52.413809 | orchestrator | Thursday 29 May 2025 00:48:37 +0000 (0:00:02.530) 0:01:00.381 ********** 2025-05-29 00:48:52.413819 | orchestrator | skipping: [testbed-node-3] => (item=['br-ex', 'vxlan0'])  2025-05-29 00:48:52.413829 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:48:52.413839 | orchestrator | skipping: [testbed-node-4] => (item=['br-ex', 'vxlan0'])  2025-05-29 00:48:52.413848 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:48:52.413858 | orchestrator | skipping: [testbed-node-5] => (item=['br-ex', 'vxlan0'])  2025-05-29 
00:48:52.413867 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:48:52.413877 | orchestrator | changed: [testbed-node-0] => (item=['br-ex', 'vxlan0']) 2025-05-29 00:48:52.413893 | orchestrator | changed: [testbed-node-1] => (item=['br-ex', 'vxlan0']) 2025-05-29 00:48:52.413904 | orchestrator | changed: [testbed-node-2] => (item=['br-ex', 'vxlan0']) 2025-05-29 00:48:52.413913 | orchestrator | 2025-05-29 00:48:52.413923 | orchestrator | RUNNING HANDLER [openvswitch : Restart openvswitch-vswitchd container] ********* 2025-05-29 00:48:52.413933 | orchestrator | Thursday 29 May 2025 00:48:41 +0000 (0:00:03.729) 0:01:04.110 ********** 2025-05-29 00:48:52.413942 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:48:52.413952 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:48:52.413961 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:48:52.413971 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:48:52.413980 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:48:52.413990 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:48:52.414000 | orchestrator | 2025-05-29 00:48:52.414009 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:48:52.414065 | orchestrator | testbed-node-0 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-29 00:48:52.414078 | orchestrator | testbed-node-1 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-29 00:48:52.414088 | orchestrator | testbed-node-2 : ok=17  changed=13  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-29 00:48:52.414098 | orchestrator | testbed-node-3 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-29 00:48:52.414108 | orchestrator | testbed-node-4 : ok=15  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-29 00:48:52.414125 | orchestrator | testbed-node-5 : ok=15  changed=11  unreachable=0 
failed=0 skipped=5  rescued=0 ignored=0 2025-05-29 00:48:52.414135 | orchestrator | 2025-05-29 00:48:52.414145 | orchestrator | 2025-05-29 00:48:52.414154 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 00:48:52.414164 | orchestrator | Thursday 29 May 2025 00:48:49 +0000 (0:00:08.390) 0:01:12.501 ********** 2025-05-29 00:48:52.414174 | orchestrator | =============================================================================== 2025-05-29 00:48:52.414183 | orchestrator | openvswitch : Restart openvswitch-vswitchd container ------------------- 18.78s 2025-05-29 00:48:52.414193 | orchestrator | openvswitch : Restart openvswitch-db-server container ------------------ 10.46s 2025-05-29 00:48:52.414203 | orchestrator | openvswitch : Set system-id, hostname and hw-offload -------------------- 7.82s 2025-05-29 00:48:52.414213 | orchestrator | openvswitch : Ensuring OVS ports are properly setup --------------------- 3.73s 2025-05-29 00:48:52.414222 | orchestrator | openvswitch : Copying over start-ovsdb-server files for openvswitch-db-server --- 2.97s 2025-05-29 00:48:52.414232 | orchestrator | openvswitch : Copying over start-ovs file for openvswitch-vswitchd ------ 2.96s 2025-05-29 00:48:52.414241 | orchestrator | openvswitch : Copying over config.json files for services --------------- 2.94s 2025-05-29 00:48:52.414251 | orchestrator | openvswitch : Check openvswitch containers ------------------------------ 2.53s 2025-05-29 00:48:52.414260 | orchestrator | openvswitch : Ensuring OVS bridge is properly setup --------------------- 2.53s 2025-05-29 00:48:52.414270 | orchestrator | module-load : Persist modules via modules-load.d ------------------------ 2.41s 2025-05-29 00:48:52.414280 | orchestrator | openvswitch : include_tasks --------------------------------------------- 2.22s 2025-05-29 00:48:52.414289 | orchestrator | openvswitch : Waiting for openvswitch_db service to be ready ------------ 2.20s 
2025-05-29 00:48:52.414303 | orchestrator | openvswitch : Ensuring config directories exist ------------------------- 1.91s 2025-05-29 00:48:52.414313 | orchestrator | module-load : Drop module persistence ----------------------------------- 1.63s 2025-05-29 00:48:52.414323 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.55s 2025-05-29 00:48:52.414332 | orchestrator | module-load : Load modules ---------------------------------------------- 1.44s 2025-05-29 00:48:52.414342 | orchestrator | openvswitch : Copying over ovs-vsctl wrapper ---------------------------- 1.24s 2025-05-29 00:48:52.414351 | orchestrator | openvswitch : Flush Handlers -------------------------------------------- 1.21s 2025-05-29 00:48:52.414361 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.92s 2025-05-29 00:48:52.414371 | orchestrator | openvswitch : Create /run/openvswitch directory on host ----------------- 0.61s 2025-05-29 00:48:52.414385 | orchestrator | 2025-05-29 00:48:52 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:48:52.414395 | orchestrator | 2025-05-29 00:48:52 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:48:55.462303 | orchestrator | 2025-05-29 00:48:55 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:48:55.466837 | orchestrator | 2025-05-29 00:48:55 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:48:55.467668 | orchestrator | 2025-05-29 00:48:55 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:48:55.468983 | orchestrator | 2025-05-29 00:48:55 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:48:55.469960 | orchestrator | 2025-05-29 00:48:55 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:48:55.470189 | orchestrator | 2025-05-29 00:48:55 | INFO  | Wait 
1 second(s) until the next check 2025-05-29 00:48:58.511379 | orchestrator | 2025-05-29 00:48:58 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:48:58.511679 | orchestrator | 2025-05-29 00:48:58 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:48:58.512173 | orchestrator | 2025-05-29 00:48:58 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:48:58.513650 | orchestrator | 2025-05-29 00:48:58 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:48:58.515011 | orchestrator | 2025-05-29 00:48:58 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:48:58.515038 | orchestrator | 2025-05-29 00:48:58 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:49:47.299616 | orchestrator | 2025-05-29 00:49:47 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:49:47.300472 | orchestrator | 2025-05-29 00:49:47 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:49:47.301011 | orchestrator | 2025-05-29 00:49:47 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:49:47.303634 | orchestrator | 2025-05-29 00:49:47 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:49:47.304493 | orchestrator | 2025-05-29 00:49:47 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:49:47.304520 | orchestrator | 2025-05-29 00:49:47 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:49:50.355742 | orchestrator | 2025-05-29 00:49:50 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:49:50.357387 | orchestrator | 2025-05-29 00:49:50 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:49:50.363081 | orchestrator | 2025-05-29 00:49:50 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:49:50.363566 | orchestrator | 2025-05-29 00:49:50 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:49:50.366957 | orchestrator | 2025-05-29 00:49:50 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:49:50.366990 | orchestrator | 2025-05-29 00:49:50 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:49:53.393990 | orchestrator | 2025-05-29 00:49:53 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:49:53.394604 | orchestrator | 2025-05-29 00:49:53 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:49:53.396331 | orchestrator | 2025-05-29 00:49:53 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:49:53.396357 | orchestrator | 2025-05-29 00:49:53 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:49:53.396855 | orchestrator | 2025-05-29 00:49:53 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:49:53.396906 | orchestrator | 2025-05-29 00:49:53 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:49:56.435246 | orchestrator | 2025-05-29 00:49:56 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:49:56.435471 | orchestrator | 2025-05-29 00:49:56 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:49:56.436846 | orchestrator | 2025-05-29 00:49:56 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:49:56.437302 | orchestrator | 2025-05-29 00:49:56 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:49:56.438127 | orchestrator | 2025-05-29 00:49:56 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:49:56.438160 | orchestrator | 2025-05-29 00:49:56 | INFO  | Wait 1 
second(s) until the next check 2025-05-29 00:49:59.461096 | orchestrator | 2025-05-29 00:49:59 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:49:59.461290 | orchestrator | 2025-05-29 00:49:59 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:49:59.461860 | orchestrator | 2025-05-29 00:49:59 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:49:59.462472 | orchestrator | 2025-05-29 00:49:59 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:49:59.463326 | orchestrator | 2025-05-29 00:49:59 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:49:59.463351 | orchestrator | 2025-05-29 00:49:59 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:02.494852 | orchestrator | 2025-05-29 00:50:02 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:50:02.495971 | orchestrator | 2025-05-29 00:50:02 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:02.498778 | orchestrator | 2025-05-29 00:50:02 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:02.501015 | orchestrator | 2025-05-29 00:50:02 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:02.503522 | orchestrator | 2025-05-29 00:50:02 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:02.503575 | orchestrator | 2025-05-29 00:50:02 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:05.539593 | orchestrator | 2025-05-29 00:50:05 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state STARTED 2025-05-29 00:50:05.539881 | orchestrator | 2025-05-29 00:50:05 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:05.540587 | orchestrator | 2025-05-29 00:50:05 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:50:05.541312 | orchestrator | 2025-05-29 00:50:05 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:50:05.541870 | orchestrator | 2025-05-29 00:50:05 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED
2025-05-29 00:50:05.542137 | orchestrator | 2025-05-29 00:50:05 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:50:08.571546 | orchestrator |
2025-05-29 00:50:08.571753 | orchestrator |
2025-05-29 00:50:08.571775 | orchestrator | PLAY [Set kolla_action_rabbitmq] ***********************************************
2025-05-29 00:50:08.571788 | orchestrator |
2025-05-29 00:50:08.571800 | orchestrator | TASK [Inform the user about the following task] ********************************
2025-05-29 00:50:08.571812 | orchestrator | Thursday 29 May 2025 00:47:58 +0000 (0:00:00.120) 0:00:00.121 **********
2025-05-29 00:50:08.571824 | orchestrator | ok: [localhost] => {
2025-05-29 00:50:08.571862 | orchestrator |     "msg": "The task 'Check RabbitMQ service' fails if the RabbitMQ service has not yet been deployed. This is fine."
2025-05-29 00:50:08.571874 | orchestrator | }
2025-05-29 00:50:08.571886 | orchestrator |
2025-05-29 00:50:08.571898 | orchestrator | TASK [Check RabbitMQ service] **************************************************
2025-05-29 00:50:08.571909 | orchestrator | Thursday 29 May 2025 00:47:58 +0000 (0:00:00.032) 0:00:00.153 **********
2025-05-29 00:50:08.571922 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string RabbitMQ Management in 192.168.16.9:15672"}
2025-05-29 00:50:08.571934 | orchestrator | ...ignoring
2025-05-29 00:50:08.571945 | orchestrator |
2025-05-29 00:50:08.571957 | orchestrator | TASK [Set kolla_action_rabbitmq = upgrade if RabbitMQ is already running] ******
2025-05-29 00:50:08.571968 | orchestrator | Thursday 29 May 2025 00:48:00 +0000 (0:00:02.499) 0:00:02.653 **********
2025-05-29 00:50:08.571979 | orchestrator | skipping: [localhost]
2025-05-29 00:50:08.571991 | orchestrator |
2025-05-29 00:50:08.572002 | orchestrator | TASK [Set kolla_action_rabbitmq = kolla_action_ng] *****************************
2025-05-29 00:50:08.572013 | orchestrator | Thursday 29 May 2025 00:48:00 +0000 (0:00:00.072) 0:00:02.725 **********
2025-05-29 00:50:08.572024 | orchestrator | ok: [localhost]
2025-05-29 00:50:08.572035 | orchestrator |
2025-05-29 00:50:08.572046 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-29 00:50:08.572057 | orchestrator |
2025-05-29 00:50:08.572068 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-29 00:50:08.572080 | orchestrator | Thursday 29 May 2025 00:48:01 +0000 (0:00:00.454) 0:00:02.918 **********
2025-05-29 00:50:08.572094 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:50:08.572107 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:50:08.572119 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:50:08.572131 | orchestrator |
2025-05-29 00:50:08.572144 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-29 00:50:08.572157 | orchestrator | Thursday 29 May 2025 00:48:01 +0000 (0:00:00.454) 0:00:03.373 **********
2025-05-29 00:50:08.572170 | orchestrator | ok: [testbed-node-1] => (item=enable_rabbitmq_True)
2025-05-29 00:50:08.572184 | orchestrator | ok: [testbed-node-0] => (item=enable_rabbitmq_True)
2025-05-29 00:50:08.572196 | orchestrator | ok: [testbed-node-2] => (item=enable_rabbitmq_True)
2025-05-29 00:50:08.572209 | orchestrator |
2025-05-29 00:50:08.572221 | orchestrator | PLAY [Apply role rabbitmq] *****************************************************
2025-05-29 00:50:08.572234 | orchestrator |
2025-05-29 00:50:08.572247 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-29 00:50:08.572260 | orchestrator | Thursday 29 May 2025 00:48:02 +0000 (0:00:00.594) 0:00:03.967 **********
2025-05-29 00:50:08.572274 | orchestrator | included: /ansible/roles/rabbitmq/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:50:08.572287 | orchestrator |
2025-05-29 00:50:08.572299 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-29 00:50:08.572312 | orchestrator | Thursday 29 May 2025 00:48:02 +0000 (0:00:00.635) 0:00:04.603 **********
2025-05-29 00:50:08.572325 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:50:08.572385 | orchestrator |
2025-05-29 00:50:08.572399 | orchestrator | TASK [rabbitmq : Get current RabbitMQ version] *********************************
2025-05-29 00:50:08.572412 | orchestrator | Thursday 29 May 2025 00:48:03 +0000 (0:00:01.119) 0:00:05.722 **********
2025-05-29 00:50:08.572425 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:50:08.572439 | orchestrator |
2025-05-29 00:50:08.572450 | orchestrator | TASK [rabbitmq : Get new RabbitMQ version] *************************************
2025-05-29 00:50:08.572461 | orchestrator | Thursday 29 May 2025 00:48:04 +0000 (0:00:00.322) 0:00:06.044 **********
2025-05-29 00:50:08.572473 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:50:08.572483 | orchestrator |
2025-05-29 00:50:08.572494 | orchestrator | TASK [rabbitmq : Check if running RabbitMQ is at most one version behind] ******
2025-05-29 00:50:08.572520 | orchestrator | Thursday 29 May 2025 00:48:05 +0000 (0:00:01.294) 0:00:07.339 **********
2025-05-29 00:50:08.572540 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:50:08.572551 | orchestrator |
2025-05-29 00:50:08.572562 | orchestrator | TASK [rabbitmq : Catch when RabbitMQ is being downgraded] **********************
2025-05-29 00:50:08.572573 | orchestrator | Thursday 29 May 2025 00:48:06 +0000 (0:00:00.455) 0:00:07.794 **********
2025-05-29 00:50:08.572584 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:50:08.572595 | orchestrator |
2025-05-29 00:50:08.572606 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-29 00:50:08.572617 | orchestrator | Thursday 29 May 2025 00:48:06 +0000 (0:00:00.321) 0:00:08.116 **********
2025-05-29 00:50:08.572654 | orchestrator | included: /ansible/roles/rabbitmq/tasks/remove-ha-all-policy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:50:08.572666 | orchestrator |
2025-05-29 00:50:08.572676 | orchestrator | TASK [rabbitmq : Get container facts] ******************************************
2025-05-29 00:50:08.572687 | orchestrator | Thursday 29 May 2025 00:48:07 +0000 (0:00:00.824) 0:00:08.940 **********
2025-05-29 00:50:08.572698 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:50:08.572709 | orchestrator |
2025-05-29 00:50:08.572720 | orchestrator | TASK [rabbitmq : List RabbitMQ policies] ***************************************
2025-05-29 00:50:08.572731 | orchestrator | Thursday 29 May 2025 00:48:08 +0000 (0:00:00.916) 0:00:09.856 **********
2025-05-29 00:50:08.572741 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:50:08.572752 | orchestrator |
2025-05-29 00:50:08.572763 | orchestrator | TASK [rabbitmq : Remove ha-all policy from RabbitMQ] ***************************
2025-05-29 00:50:08.572774 | orchestrator | Thursday 29 May 2025 00:48:08 +0000 (0:00:00.372) 0:00:10.229 **********
2025-05-29 00:50:08.572785 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:50:08.572796 | orchestrator |
2025-05-29 00:50:08.572824 | orchestrator | TASK [rabbitmq : Ensuring config directories exist] ****************************
2025-05-29 00:50:08.572835 | orchestrator | Thursday 29 May 2025 00:48:08 +0000 (0:00:00.497) 0:00:10.727 **********
2025-05-29 00:50:08.572851 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-29 00:50:08.572867 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-29 00:50:08.572894 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-29 00:50:08.572907 | orchestrator |
2025-05-29 00:50:08.572918 | orchestrator | TASK [rabbitmq : Copying over config.json files for services] ******************
2025-05-29 00:50:08.572929 | orchestrator | Thursday 29 May 2025 00:48:10 +0000 (0:00:01.096) 0:00:11.823 **********
2025-05-29 00:50:08.572951 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-29 00:50:08.572965 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-29 00:50:08.572977 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-29 00:50:08.572999 | orchestrator |
2025-05-29 00:50:08.573010 | orchestrator | TASK [rabbitmq : Copying over rabbitmq-env.conf] *******************************
2025-05-29 00:50:08.573021 | orchestrator | Thursday 29 May 2025 00:48:12 +0000 (0:00:02.275) 0:00:14.099 **********
2025-05-29 00:50:08.573032 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-29 00:50:08.573048 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-29 00:50:08.573059 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq-env.conf.j2)
2025-05-29 00:50:08.573070 | orchestrator |
2025-05-29 00:50:08.573081 | orchestrator | TASK [rabbitmq : Copying over rabbitmq.conf] ***********************************
2025-05-29 00:50:08.573091 | orchestrator | Thursday 29 May 2025 00:48:13 +0000 (0:00:01.543) 0:00:15.643 **********
2025-05-29 00:50:08.573102 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-29 00:50:08.573113 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-29 00:50:08.573124 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/rabbitmq.conf.j2)
2025-05-29 00:50:08.573138 | orchestrator |
2025-05-29 00:50:08.573157 | orchestrator | TASK [rabbitmq : Copying over erl_inetrc] **************************************
2025-05-29 00:50:08.573176 | orchestrator | Thursday 29 May 2025 00:48:16 +0000 (0:00:02.586) 0:00:18.229 **********
2025-05-29 00:50:08.573195 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-29 00:50:08.573213 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-29 00:50:08.573230 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/erl_inetrc.j2)
2025-05-29 00:50:08.573248 | orchestrator |
2025-05-29 00:50:08.573274 | orchestrator | TASK [rabbitmq : Copying over advanced.config] *********************************
2025-05-29 00:50:08.573295 | orchestrator | Thursday 29 May 2025 00:48:18 +0000 (0:00:01.949) 0:00:20.178 **********
2025-05-29 00:50:08.573313 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-29 00:50:08.573331 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-29 00:50:08.573343 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/advanced.config.j2)
2025-05-29 00:50:08.573354 | orchestrator |
2025-05-29 00:50:08.573365 | orchestrator | TASK [rabbitmq : Copying over definitions.json] ********************************
2025-05-29 00:50:08.573376 | orchestrator | Thursday 29 May 2025 00:48:21 +0000 (0:00:02.657) 0:00:22.836 **********
2025-05-29 00:50:08.573387 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-29 00:50:08.573398 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-29 00:50:08.573408 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/definitions.json.j2)
2025-05-29 00:50:08.573419 | orchestrator |
2025-05-29 00:50:08.573430 | orchestrator | TASK [rabbitmq : Copying over enabled_plugins] *********************************
2025-05-29 00:50:08.573441 | orchestrator | Thursday 29 May 2025 00:48:22 +0000 (0:00:01.310) 0:00:24.146 **********
2025-05-29 00:50:08.573461 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-29 00:50:08.573472 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-29 00:50:08.573483 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/rabbitmq/templates/enabled_plugins.j2)
2025-05-29 00:50:08.573494 | orchestrator |
2025-05-29 00:50:08.573505 | orchestrator | TASK [rabbitmq : include_tasks] ************************************************
2025-05-29 00:50:08.573516 | orchestrator | Thursday 29 May 2025 00:48:24 +0000 (0:00:01.608) 0:00:25.755 **********
2025-05-29 00:50:08.573527 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:50:08.573538 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:50:08.573549 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:50:08.573560 | orchestrator |
2025-05-29 00:50:08.573571 | orchestrator | TASK [rabbitmq : Check rabbitmq containers] ************************************
2025-05-29 00:50:08.573582 | orchestrator | Thursday 29 May 2025 00:48:24 +0000 (0:00:00.692) 0:00:26.448 **********
2025-05-29 00:50:08.573594 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-29 00:50:08.573613 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-29 00:50:08.573662 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': 'rabbitmq', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': 'zdd6geSBXefcI7IoHnP1U1fxtRWS3u5QtnPCvQTT', 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})
2025-05-29 00:50:08.573696 | orchestrator |
2025-05-29 00:50:08.573714 | orchestrator | TASK [rabbitmq : Creating rabbitmq volume] *************************************
2025-05-29 00:50:08.573734 | orchestrator | Thursday 29 May 2025 00:48:26 +0000 (0:00:01.563) 0:00:28.011 **********
2025-05-29 00:50:08.573752 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:50:08.573770 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:50:08.573789 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:50:08.573801 | orchestrator |
2025-05-29 00:50:08.573812 | orchestrator | TASK [rabbitmq : Running RabbitMQ bootstrap container] *************************
2025-05-29 00:50:08.573822 | orchestrator | Thursday 29 May 2025 00:48:27 +0000 (0:00:01.116) 0:00:29.127 **********
2025-05-29 00:50:08.573842 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:50:08.573861 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:50:08.573879 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:50:08.573896 | orchestrator |
2025-05-29 00:50:08.573914 | orchestrator | RUNNING HANDLER [rabbitmq : Restart rabbitmq container] ************************
2025-05-29 00:50:08.573934 | orchestrator | Thursday 29 May 2025 00:48:34 +0000 (0:00:06.634) 0:00:35.762 **********
2025-05-29 00:50:08.573951 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:50:08.573965 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:50:08.573984 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:50:08.574003 | orchestrator |
2025-05-29 00:50:08.574137 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-29 00:50:08.574159 | orchestrator |
2025-05-29 00:50:08.574179 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-29 00:50:08.574197 | orchestrator | Thursday 29 May 2025 00:48:34 +0000 (0:00:00.328) 0:00:36.090 **********
2025-05-29 00:50:08.574217 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:50:08.574236 | orchestrator |
2025-05-29 00:50:08.574255 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-29 00:50:08.574275 | orchestrator | Thursday 29 May 2025 00:48:34 +0000 (0:00:00.559) 0:00:36.650 **********
2025-05-29 00:50:08.574293 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:50:08.574313 | orchestrator |
2025-05-29 00:50:08.574330 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-29 00:50:08.574349 | orchestrator | Thursday 29 May 2025 00:48:35 +0000 (0:00:00.708) 0:00:37.358 **********
2025-05-29 00:50:08.574367 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:50:08.574385 | orchestrator |
2025-05-29 00:50:08.574404 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-29 00:50:08.574422 | orchestrator | Thursday 29 May 2025 00:48:37 +0000 (0:00:01.667) 0:00:39.026 **********
2025-05-29 00:50:08.574439 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:50:08.574456 | orchestrator |
2025-05-29 00:50:08.574473 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-29 00:50:08.574492 | orchestrator |
2025-05-29 00:50:08.574509 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-29 00:50:08.574526 | orchestrator | Thursday 29 May 2025 00:49:29 +0000 (0:00:52.437) 0:01:31.464 **********
2025-05-29 00:50:08.574544 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:50:08.574562 | orchestrator |
2025-05-29 00:50:08.574579 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-29 00:50:08.574596 | orchestrator | Thursday 29 May 2025 00:49:30 +0000 (0:00:00.832) 0:01:32.296 **********
2025-05-29 00:50:08.574614 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:50:08.574676 | orchestrator |
2025-05-29 00:50:08.574695 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-29 00:50:08.574714 | orchestrator | Thursday 29 May 2025 00:49:30 +0000 (0:00:00.227) 0:01:32.523 **********
2025-05-29 00:50:08.574731 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:50:08.574765 | orchestrator |
2025-05-29 00:50:08.574784 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-29 00:50:08.574802 | orchestrator | Thursday 29 May 2025 00:49:37 +0000 (0:00:06.697) 0:01:39.221 **********
2025-05-29 00:50:08.574822 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:50:08.574840 | orchestrator |
2025-05-29 00:50:08.574858 | orchestrator | PLAY [Restart rabbitmq services] ***********************************************
2025-05-29 00:50:08.574876 | orchestrator |
2025-05-29 00:50:08.574895 | orchestrator | TASK [rabbitmq : Get info on RabbitMQ container] *******************************
2025-05-29 00:50:08.574913 | orchestrator | Thursday 29 May 2025 00:49:48 +0000 (0:00:10.859) 0:01:50.080 **********
2025-05-29 00:50:08.574932 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:50:08.574950 | orchestrator |
2025-05-29 00:50:08.574968 | orchestrator | TASK [rabbitmq : Put RabbitMQ node into maintenance mode] **********************
2025-05-29 00:50:08.574988 | orchestrator | Thursday 29 May 2025 00:49:48 +0000 (0:00:00.652) 0:01:50.732 **********
2025-05-29 00:50:08.575006 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:50:08.575024 | orchestrator |
2025-05-29 00:50:08.575042 | orchestrator | TASK [rabbitmq : Restart rabbitmq container] ***********************************
2025-05-29 00:50:08.575078 | orchestrator | Thursday 29 May 2025 00:49:49 +0000 (0:00:00.384) 0:01:51.116 **********
2025-05-29 00:50:08.575099 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:50:08.575118 | orchestrator |
2025-05-29 00:50:08.575137 | orchestrator | TASK [rabbitmq : Waiting for rabbitmq to start] ********************************
2025-05-29 00:50:08.575155 | orchestrator | Thursday 29 May 2025 00:49:56 +0000 (0:00:06.720) 0:01:57.837 **********
2025-05-29 00:50:08.575175 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:50:08.575194 | orchestrator |
2025-05-29 00:50:08.575213 | orchestrator | PLAY [Apply rabbitmq post-configuration]
*************************************** 2025-05-29 00:50:08.575231 | orchestrator | 2025-05-29 00:50:08.575249 | orchestrator | TASK [Include rabbitmq post-deploy.yml] **************************************** 2025-05-29 00:50:08.575268 | orchestrator | Thursday 29 May 2025 00:50:04 +0000 (0:00:08.388) 0:02:06.225 ********** 2025-05-29 00:50:08.575287 | orchestrator | included: rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:50:08.575306 | orchestrator | 2025-05-29 00:50:08.575325 | orchestrator | TASK [rabbitmq : Enable all stable feature flags] ****************************** 2025-05-29 00:50:08.575343 | orchestrator | Thursday 29 May 2025 00:50:05 +0000 (0:00:00.529) 0:02:06.755 ********** 2025-05-29 00:50:08.576167 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-29 00:50:08.576212 | orchestrator | enable_outward_rabbitmq_True 2025-05-29 00:50:08.576224 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: 2025-05-29 00:50:08.576235 | orchestrator | outward_rabbitmq_restart 2025-05-29 00:50:08.576246 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:50:08.576258 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:50:08.576269 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:50:08.576280 | orchestrator | 2025-05-29 00:50:08.576291 | orchestrator | PLAY [Apply role rabbitmq (outward)] ******************************************* 2025-05-29 00:50:08.576302 | orchestrator | skipping: no hosts matched 2025-05-29 00:50:08.576313 | orchestrator | 2025-05-29 00:50:08.576324 | orchestrator | PLAY [Restart rabbitmq (outward) services] ************************************* 2025-05-29 00:50:08.576334 | orchestrator | skipping: no hosts matched 2025-05-29 00:50:08.576345 | orchestrator | 2025-05-29 00:50:08.576356 | orchestrator | PLAY [Apply rabbitmq (outward) post-configuration] ***************************** 2025-05-29 00:50:08.576367 | orchestrator | skipping: no hosts matched 
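The three "Restart rabbitmq services" plays above run against one node at a time: each play restarts the node's container ("Restart rabbitmq container") and blocks on "Waiting for rabbitmq to start" before the next play begins, so the RabbitMQ cluster never loses more than one member at once. A minimal sketch of that serialized rolling-restart pattern (not kolla-ansible's actual implementation, which uses Ansible plays and handlers) is:

```python
def rolling_restart(nodes, restart, wait_ready):
    """Restart nodes strictly one at a time, waiting for each to come
    back before touching the next -- preserves cluster quorum."""
    for node in nodes:
        restart(node)      # corresponds to "Restart rabbitmq container"
        wait_ready(node)   # corresponds to "Waiting for rabbitmq to start"

# Record the call order with stub callbacks (hypothetical stand-ins).
events = []
rolling_restart(
    ["testbed-node-0", "testbed-node-1", "testbed-node-2"],
    restart=lambda n: events.append(("restart", n)),
    wait_ready=lambda n: events.append(("wait", n)),
)
print(events)
```

This matches the timing in the log: node-0's restart completes (including the 52 s wait) before node-1's play even starts.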
2025-05-29 00:50:08.576377 | orchestrator | 2025-05-29 00:50:08.576388 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:50:08.576400 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-29 00:50:08.576412 | orchestrator | testbed-node-0 : ok=23  changed=14  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-29 00:50:08.576440 | orchestrator | testbed-node-1 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:50:08.576451 | orchestrator | testbed-node-2 : ok=21  changed=14  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0 2025-05-29 00:50:08.576461 | orchestrator | 2025-05-29 00:50:08.576472 | orchestrator | 2025-05-29 00:50:08.576483 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 00:50:08.576494 | orchestrator | Thursday 29 May 2025 00:50:07 +0000 (0:00:02.355) 0:02:09.112 ********** 2025-05-29 00:50:08.576505 | orchestrator | =============================================================================== 2025-05-29 00:50:08.576516 | orchestrator | rabbitmq : Waiting for rabbitmq to start ------------------------------- 71.69s 2025-05-29 00:50:08.576526 | orchestrator | rabbitmq : Restart rabbitmq container ---------------------------------- 15.09s 2025-05-29 00:50:08.576537 | orchestrator | rabbitmq : Running RabbitMQ bootstrap container ------------------------- 6.63s 2025-05-29 00:50:08.576548 | orchestrator | rabbitmq : Copying over advanced.config --------------------------------- 2.66s 2025-05-29 00:50:08.576559 | orchestrator | rabbitmq : Copying over rabbitmq.conf ----------------------------------- 2.59s 2025-05-29 00:50:08.576570 | orchestrator | Check RabbitMQ service -------------------------------------------------- 2.50s 2025-05-29 00:50:08.576580 | orchestrator | rabbitmq : Enable all stable feature flags 
------------------------------ 2.36s 2025-05-29 00:50:08.576591 | orchestrator | rabbitmq : Copying over config.json files for services ------------------ 2.28s 2025-05-29 00:50:08.576602 | orchestrator | rabbitmq : Get info on RabbitMQ container ------------------------------- 2.04s 2025-05-29 00:50:08.576612 | orchestrator | rabbitmq : Copying over erl_inetrc -------------------------------------- 1.95s 2025-05-29 00:50:08.576653 | orchestrator | rabbitmq : Copying over enabled_plugins --------------------------------- 1.61s 2025-05-29 00:50:08.576674 | orchestrator | rabbitmq : Check rabbitmq containers ------------------------------------ 1.56s 2025-05-29 00:50:08.576687 | orchestrator | rabbitmq : Copying over rabbitmq-env.conf ------------------------------- 1.54s 2025-05-29 00:50:08.576697 | orchestrator | rabbitmq : Put RabbitMQ node into maintenance mode ---------------------- 1.32s 2025-05-29 00:50:08.576708 | orchestrator | rabbitmq : Copying over definitions.json -------------------------------- 1.31s 2025-05-29 00:50:08.576719 | orchestrator | rabbitmq : Get new RabbitMQ version ------------------------------------- 1.29s 2025-05-29 00:50:08.576730 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 1.12s 2025-05-29 00:50:08.576741 | orchestrator | rabbitmq : Creating rabbitmq volume ------------------------------------- 1.12s 2025-05-29 00:50:08.576751 | orchestrator | rabbitmq : Ensuring config directories exist ---------------------------- 1.10s 2025-05-29 00:50:08.576762 | orchestrator | rabbitmq : Get container facts ------------------------------------------ 0.92s 2025-05-29 00:50:08.576792 | orchestrator | 2025-05-29 00:50:08 | INFO  | Task b4d553e8-d865-463f-8d72-4b12f126d941 is in state SUCCESS 2025-05-29 00:50:08.576804 | orchestrator | 2025-05-29 00:50:08 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:08.576815 | orchestrator | 2025-05-29 00:50:08 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:08.576826 | orchestrator | 2025-05-29 00:50:08 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:08.576837 | orchestrator | 2025-05-29 00:50:08 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:08.576848 | orchestrator | 2025-05-29 00:50:08 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:11.620397 | orchestrator | 2025-05-29 00:50:11 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:11.621401 | orchestrator | 2025-05-29 00:50:11 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:11.622907 | orchestrator | 2025-05-29 00:50:11 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:11.626953 | orchestrator | 2025-05-29 00:50:11 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:11.626985 | orchestrator | 2025-05-29 00:50:11 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:14.676957 | orchestrator | 2025-05-29 00:50:14 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:14.678709 | orchestrator | 2025-05-29 00:50:14 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:14.681213 | orchestrator | 2025-05-29 00:50:14 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:14.685789 | orchestrator | 2025-05-29 00:50:14 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:14.685857 | orchestrator | 2025-05-29 00:50:14 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:17.729476 | orchestrator | 2025-05-29 00:50:17 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:17.729862 | orchestrator | 2025-05-29 00:50:17 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:17.730671 | orchestrator | 2025-05-29 00:50:17 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:17.732115 | orchestrator | 2025-05-29 00:50:17 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:17.732154 | orchestrator | 2025-05-29 00:50:17 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:20.778976 | orchestrator | 2025-05-29 00:50:20 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:20.779464 | orchestrator | 2025-05-29 00:50:20 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:20.780234 | orchestrator | 2025-05-29 00:50:20 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:20.781029 | orchestrator | 2025-05-29 00:50:20 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:20.781838 | orchestrator | 2025-05-29 00:50:20 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:23.816038 | orchestrator | 2025-05-29 00:50:23 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:23.818265 | orchestrator | 2025-05-29 00:50:23 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:23.821216 | orchestrator | 2025-05-29 00:50:23 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:23.822701 | orchestrator | 2025-05-29 00:50:23 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:23.822721 | orchestrator | 2025-05-29 00:50:23 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:26.874871 | orchestrator | 2025-05-29 00:50:26 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:26.876293 | orchestrator | 2025-05-29 00:50:26 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:26.877565 | orchestrator | 2025-05-29 00:50:26 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:26.879019 | orchestrator | 2025-05-29 00:50:26 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:26.879068 | orchestrator | 2025-05-29 00:50:26 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:29.930212 | orchestrator | 2025-05-29 00:50:29 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:29.931487 | orchestrator | 2025-05-29 00:50:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:29.932948 | orchestrator | 2025-05-29 00:50:29 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:29.935048 | orchestrator | 2025-05-29 00:50:29 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:29.935097 | orchestrator | 2025-05-29 00:50:29 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:32.985042 | orchestrator | 2025-05-29 00:50:32 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:32.985182 | orchestrator | 2025-05-29 00:50:32 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:32.985987 | orchestrator | 2025-05-29 00:50:32 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:32.987440 | orchestrator | 2025-05-29 00:50:32 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:32.987884 | orchestrator | 2025-05-29 00:50:32 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:36.040311 | orchestrator | 2025-05-29 00:50:36 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:36.040421 | orchestrator | 2025-05-29 00:50:36 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:36.041933 | orchestrator | 2025-05-29 00:50:36 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:36.042522 | orchestrator | 2025-05-29 00:50:36 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:36.042563 | orchestrator | 2025-05-29 00:50:36 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:39.089495 | orchestrator | 2025-05-29 00:50:39 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:39.089780 | orchestrator | 2025-05-29 00:50:39 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:39.090570 | orchestrator | 2025-05-29 00:50:39 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:39.091450 | orchestrator | 2025-05-29 00:50:39 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:39.091473 | orchestrator | 2025-05-29 00:50:39 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:42.136291 | orchestrator | 2025-05-29 00:50:42 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:42.136698 | orchestrator | 2025-05-29 00:50:42 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:42.137995 | orchestrator | 2025-05-29 00:50:42 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:42.139150 | orchestrator | 2025-05-29 00:50:42 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:42.139232 | orchestrator | 2025-05-29 00:50:42 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:45.184935 | orchestrator | 2025-05-29 00:50:45 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:45.185641 | orchestrator | 2025-05-29 00:50:45 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:45.185681 | orchestrator | 2025-05-29 00:50:45 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:45.186796 | orchestrator | 2025-05-29 00:50:45 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:45.186863 | orchestrator | 2025-05-29 00:50:45 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:48.226937 | orchestrator | 2025-05-29 00:50:48 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:48.227550 | orchestrator | 2025-05-29 00:50:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:48.228615 | orchestrator | 2025-05-29 00:50:48 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:48.229556 | orchestrator | 2025-05-29 00:50:48 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:48.229607 | orchestrator | 2025-05-29 00:50:48 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:51.291353 | orchestrator | 2025-05-29 00:50:51 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:51.293489 | orchestrator | 2025-05-29 00:50:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:51.297538 | orchestrator | 2025-05-29 00:50:51 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:51.297617 | orchestrator | 2025-05-29 00:50:51 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:51.297820 | orchestrator | 2025-05-29 00:50:51 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:54.355286 | orchestrator | 2025-05-29 00:50:54 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:54.356184 | orchestrator | 2025-05-29 00:50:54 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:54.357815 | orchestrator | 2025-05-29 00:50:54 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:54.358760 | orchestrator | 2025-05-29 00:50:54 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:54.358813 | orchestrator | 2025-05-29 00:50:54 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:50:57.423011 | orchestrator | 2025-05-29 00:50:57 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:50:57.424779 | orchestrator | 2025-05-29 00:50:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:50:57.425747 | orchestrator | 2025-05-29 00:50:57 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:50:57.426832 | orchestrator | 2025-05-29 00:50:57 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:50:57.427419 | orchestrator | 2025-05-29 00:50:57 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:51:00.479029 | orchestrator | 2025-05-29 00:51:00 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:51:00.479331 | orchestrator | 2025-05-29 00:51:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:51:00.480858 | orchestrator | 2025-05-29 00:51:00 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:51:00.480900 | orchestrator | 2025-05-29 00:51:00 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:51:00.480912 | orchestrator | 2025-05-29 00:51:00 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:51:03.529515 | orchestrator | 2025-05-29 00:51:03 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:51:03.532369 | orchestrator | 2025-05-29 00:51:03 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:51:03.535393 | orchestrator | 2025-05-29 00:51:03 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:51:03.537484 | orchestrator | 2025-05-29 00:51:03 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:51:03.538507 | orchestrator | 2025-05-29 00:51:03 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:51:06.592258 | orchestrator | 2025-05-29 00:51:06 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:51:06.593650 | orchestrator | 2025-05-29 00:51:06 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:51:06.594944 | orchestrator | 2025-05-29 00:51:06 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:51:06.595930 | orchestrator | 2025-05-29 00:51:06 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:51:06.595953 | orchestrator | 2025-05-29 00:51:06 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:51:09.648072 | orchestrator | 2025-05-29 00:51:09 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:51:09.648194 | orchestrator | 2025-05-29 00:51:09 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:51:09.649701 | orchestrator | 2025-05-29 00:51:09 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:51:09.650389 | orchestrator | 2025-05-29 00:51:09 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:51:09.650418 | orchestrator | 2025-05-29 00:51:09 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:51:12.684192 | orchestrator | 2025-05-29 00:51:12 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:51:12.685038 | orchestrator | 2025-05-29 00:51:12 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:51:12.686864 | orchestrator | 2025-05-29 00:51:12 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:51:12.688700 | orchestrator | 2025-05-29 00:51:12 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:51:12.688730 | orchestrator | 2025-05-29 00:51:12 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:51:15.717358 | orchestrator | 2025-05-29 00:51:15 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:51:15.717642 | orchestrator | 2025-05-29 00:51:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:51:15.718149 | orchestrator | 2025-05-29 00:51:15 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:51:15.719053 | orchestrator | 2025-05-29 00:51:15 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state STARTED 2025-05-29 00:51:15.719076 | orchestrator | 2025-05-29 00:51:15 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:51:18.772352 | orchestrator | 2025-05-29 00:51:18 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:51:18.775584 | orchestrator | 2025-05-29 00:51:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:51:18.777742 | orchestrator | 2025-05-29 00:51:18 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:51:18.778836 | orchestrator | 2025-05-29 00:51:18 | INFO  | Task 0c2e12cd-a742-498c-96cf-911eee3b81af is in state SUCCESS 2025-05-29 00:51:18.780444 | orchestrator | 2025-05-29 00:51:18.780500 | orchestrator | 2025-05-29 00:51:18.780521 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-29 00:51:18.780613 | orchestrator | 2025-05-29 00:51:18.780685 | orchestrator | TASK [Group hosts based on Kolla action] 
*************************************** 2025-05-29 00:51:18.780707 | orchestrator | Thursday 29 May 2025 00:48:54 +0000 (0:00:00.296) 0:00:00.296 ********** 2025-05-29 00:51:18.780728 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:51:18.780748 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:51:18.780913 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:51:18.780935 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:51:18.780956 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:51:18.780976 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:51:18.780996 | orchestrator | 2025-05-29 00:51:18.781016 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-29 00:51:18.781035 | orchestrator | Thursday 29 May 2025 00:48:54 +0000 (0:00:00.694) 0:00:00.990 ********** 2025-05-29 00:51:18.781054 | orchestrator | ok: [testbed-node-3] => (item=enable_ovn_True) 2025-05-29 00:51:18.781074 | orchestrator | ok: [testbed-node-4] => (item=enable_ovn_True) 2025-05-29 00:51:18.781091 | orchestrator | ok: [testbed-node-5] => (item=enable_ovn_True) 2025-05-29 00:51:18.781108 | orchestrator | ok: [testbed-node-0] => (item=enable_ovn_True) 2025-05-29 00:51:18.781127 | orchestrator | ok: [testbed-node-1] => (item=enable_ovn_True) 2025-05-29 00:51:18.781145 | orchestrator | ok: [testbed-node-2] => (item=enable_ovn_True) 2025-05-29 00:51:18.781164 | orchestrator | 2025-05-29 00:51:18.781183 | orchestrator | PLAY [Apply role ovn-controller] *********************************************** 2025-05-29 00:51:18.781202 | orchestrator | 2025-05-29 00:51:18.781222 | orchestrator | TASK [ovn-controller : include_tasks] ****************************************** 2025-05-29 00:51:18.781233 | orchestrator | Thursday 29 May 2025 00:48:56 +0000 (0:00:01.550) 0:00:02.541 ********** 2025-05-29 00:51:18.781245 | orchestrator | included: /ansible/roles/ovn-controller/tasks/deploy.yml for testbed-node-3, testbed-node-4, testbed-node-5, 
testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:51:18.781257 | orchestrator | 2025-05-29 00:51:18.781268 | orchestrator | TASK [ovn-controller : Ensuring config directories exist] ********************** 2025-05-29 00:51:18.781279 | orchestrator | Thursday 29 May 2025 00:48:57 +0000 (0:00:01.344) 0:00:03.885 ********** 2025-05-29 00:51:18.781292 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.781306 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.781331 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.781343 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.781367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.781394 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.781406 | orchestrator | 2025-05-29 00:51:18.781418 | orchestrator | TASK [ovn-controller : Copying over config.json files for services] ************ 2025-05-29 00:51:18.781429 | orchestrator | Thursday 29 May 2025 00:48:59 +0000 (0:00:01.291) 0:00:05.177 ********** 2025-05-29 00:51:18.781440 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-29 00:51:18.781451 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781463 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781486 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781502 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781520 | orchestrator |
2025-05-29 00:51:18.781556 | orchestrator | TASK [ovn-controller : Ensuring systemd override directory exists] *************
2025-05-29 00:51:18.781578 | orchestrator | Thursday 29 May 2025 00:49:01 +0000 (0:00:01.967) 0:00:07.145 **********
2025-05-29 00:51:18.781599 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781619 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781651 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781664 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781676 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781698 | orchestrator |
2025-05-29 00:51:18.781709 | orchestrator | TASK [ovn-controller : Copying over systemd override] **************************
2025-05-29 00:51:18.781720 | orchestrator | Thursday 29 May 2025 00:49:02 +0000 (0:00:01.879) 0:00:09.024 **********
2025-05-29 00:51:18.781732 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781749 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781767 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781779 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781790 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781820 | orchestrator |
2025-05-29 00:51:18.781831 | orchestrator | TASK [ovn-controller : Check ovn-controller containers] ************************
2025-05-29 00:51:18.781842 | orchestrator | Thursday 29 May 2025 00:49:05 +0000 (0:00:02.717) 0:00:11.742 **********
2025-05-29 00:51:18.781853 | orchestrator | changed: [testbed-node-4] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781864 | orchestrator | changed: [testbed-node-5] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781875 | orchestrator | changed: [testbed-node-3] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781907 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781919 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-controller', 'value': {'container_name': 'ovn_controller', 'group': 'ovn-controller', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-controller:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-controller/:/var/lib/kolla/config_files/:ro', '/run/openvswitch:/run/openvswitch:shared', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.781930 | orchestrator |
2025-05-29 00:51:18.781941 | orchestrator | TASK [ovn-controller : Create br-int bridge on OpenvSwitch] ********************
2025-05-29 00:51:18.781952 | orchestrator | Thursday 29 May 2025 00:49:07 +0000 (0:00:01.359) 0:00:13.102 **********
2025-05-29 00:51:18.781963 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:51:18.781975 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:51:18.781986 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:51:18.781996 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:51:18.782007 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:51:18.782064 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:51:18.782077 | orchestrator |
2025-05-29 00:51:18.782088 | orchestrator | TASK [ovn-controller : Configure OVN in OVSDB] *********************************
2025-05-29 00:51:18.782099 | orchestrator | Thursday 29 May 2025 00:49:10 +0000 (0:00:03.178) 0:00:16.280 **********
2025-05-29 00:51:18.782110 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.13'})
2025-05-29 00:51:18.782120 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.15'})
2025-05-29 00:51:18.782131 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.14'})
2025-05-29 00:51:18.782148 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.10'})
2025-05-29 00:51:18.782159 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.11'})
2025-05-29 00:51:18.782170 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-ip', 'value': '192.168.16.12'})
2025-05-29 00:51:18.782181 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-29 00:51:18.782191 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-29 00:51:18.782202 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-29 00:51:18.782213 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-29 00:51:18.782223 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-29 00:51:18.782234 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-encap-type', 'value': 'geneve'})
2025-05-29 00:51:18.782245 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-29 00:51:18.782257 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-29 00:51:18.782268 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-29 00:51:18.782279 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-29 00:51:18.782297 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-29 00:51:18.782308 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote', 'value': 'tcp:192.168.16.10:6642,tcp:192.168.16.11:6642,tcp:192.168.16.12:6642'})
2025-05-29 00:51:18.782319 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-29 00:51:18.782330 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-29 00:51:18.782342 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-29 00:51:18.782352 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-29 00:51:18.782363 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-29 00:51:18.782374 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-remote-probe-interval', 'value': '60000'})
2025-05-29 00:51:18.782384 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-29 00:51:18.782395 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-29 00:51:18.782411 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-29 00:51:18.782422 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-29 00:51:18.782432 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-29 00:51:18.782443 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-openflow-probe-interval', 'value': '60'})
2025-05-29 00:51:18.782454 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-29 00:51:18.782464 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-29 00:51:18.782475 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-29 00:51:18.782486 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-29 00:51:18.782497 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-29 00:51:18.782508 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-monitor-all', 'value': False})
2025-05-29 00:51:18.782518 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-29 00:51:18.782529 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-29 00:51:18.782565 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'absent'})
2025-05-29 00:51:18.782577 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-29 00:51:18.782593 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-29 00:51:18.782604 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-bridge-mappings', 'value': 'physnet1:br-ex', 'state': 'present'})
2025-05-29 00:51:18.782615 | orchestrator | changed: [testbed-node-5] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:71:3a:c3', 'state': 'present'})
2025-05-29 00:51:18.782626 | orchestrator | changed: [testbed-node-4] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:2f:fa:44', 'state': 'present'})
2025-05-29 00:51:18.782637 | orchestrator | changed: [testbed-node-3] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:89:18:56', 'state': 'present'})
2025-05-29 00:51:18.782654 | orchestrator | ok: [testbed-node-0] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:52:c1:40', 'state': 'absent'})
2025-05-29 00:51:18.782666 | orchestrator | ok: [testbed-node-1] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:33:12:50', 'state': 'absent'})
2025-05-29 00:51:18.782676 | orchestrator | ok: [testbed-node-2] => (item={'name': 'ovn-chassis-mac-mappings', 'value': 'physnet1:52:54:00:29:4a:9b', 'state': 'absent'})
2025-05-29 00:51:18.782687 | orchestrator | ok: [testbed-node-5] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-29 00:51:18.782698 | orchestrator | ok: [testbed-node-4] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-29 00:51:18.782708 | orchestrator | ok: [testbed-node-3] => (item={'name': 'ovn-cms-options', 'value': '', 'state': 'absent'})
2025-05-29 00:51:18.782723 | orchestrator | changed: [testbed-node-0] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-29 00:51:18.782742 | orchestrator | changed: [testbed-node-1] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-29 00:51:18.782760 | orchestrator | changed: [testbed-node-2] => (item={'name': 'ovn-cms-options', 'value': 'enable-chassis-as-gw,availability-zones=nova', 'state': 'present'})
2025-05-29 00:51:18.782778 | orchestrator |
2025-05-29 00:51:18.782796 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-29 00:51:18.782814 | orchestrator | Thursday 29 May 2025 00:49:28 +0000 (0:00:18.457) 0:00:34.738 **********
2025-05-29 00:51:18.782832 | orchestrator |
2025-05-29 00:51:18.782852 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-29 00:51:18.782872 | orchestrator | Thursday 29 May 2025 00:49:28 +0000 (0:00:00.059) 0:00:34.797 **********
2025-05-29 00:51:18.782890 | orchestrator |
2025-05-29 00:51:18.782909 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-29 00:51:18.782921 | orchestrator | Thursday 29 May 2025 00:49:28 +0000 (0:00:00.268) 0:00:35.122 **********
2025-05-29 00:51:18.782963 | orchestrator |
2025-05-29 00:51:18.782974 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-29 00:51:18.782985 | orchestrator | Thursday 29 May 2025 00:49:29 +0000 (0:00:00.054) 0:00:35.177 **********
2025-05-29 00:51:18.782996 | orchestrator |
2025-05-29 00:51:18.783013 | orchestrator | TASK [ovn-controller : Flush handlers] *****************************************
2025-05-29 00:51:18.783024 | orchestrator | Thursday 29 May 2025 00:49:29 +0000 (0:00:00.055) 0:00:35.232 **********
2025-05-29 00:51:18.783035 | orchestrator |
2025-05-29 00:51:18.783045 | orchestrator | RUNNING HANDLER [ovn-controller : Reload systemd config] ***********************
2025-05-29 00:51:18.783056 | orchestrator | Thursday 29 May 2025 00:49:29 +0000 (0:00:00.054) 0:00:35.286 **********
2025-05-29 00:51:18.783067 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:51:18.783078 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:51:18.783089 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:51:18.783100 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:51:18.783111 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:51:18.783122 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:51:18.783132 | orchestrator |
2025-05-29 00:51:18.783143 | orchestrator | RUNNING HANDLER [ovn-controller : Restart ovn-controller container] ************
2025-05-29 00:51:18.783154 | orchestrator | Thursday 29 May 2025 00:49:31 +0000 (0:00:02.168) 0:00:37.455 **********
2025-05-29 00:51:18.783165 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:51:18.783176 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:51:18.783187 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:51:18.783198 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:51:18.783216 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:51:18.783227 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:51:18.783238 | orchestrator |
2025-05-29 00:51:18.783249 | orchestrator | PLAY [Apply role ovn-db] *******************************************************
2025-05-29 00:51:18.783260 | orchestrator |
2025-05-29 00:51:18.783271 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-05-29 00:51:18.783281 | orchestrator | Thursday 29 May 2025 00:49:52 +0000 (0:00:21.474) 0:00:58.930 **********
2025-05-29 00:51:18.783292 | orchestrator | included: /ansible/roles/ovn-db/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:51:18.783303 | orchestrator |
2025-05-29 00:51:18.783314 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-05-29 00:51:18.783325 | orchestrator | Thursday 29 May 2025 00:49:53 +0000 (0:00:00.807) 0:00:59.737 **********
2025-05-29 00:51:18.783335 | orchestrator | included: /ansible/roles/ovn-db/tasks/lookup_cluster.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:51:18.783346 | orchestrator |
2025-05-29 00:51:18.783365 | orchestrator | TASK [ovn-db : Checking for any existing OVN DB container volumes] *************
2025-05-29 00:51:18.783376 | orchestrator | Thursday 29 May 2025 00:49:54 +0000 (0:00:00.887) 0:01:00.625 **********
2025-05-29 00:51:18.783387 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:51:18.783398 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:51:18.783409 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:51:18.783420 | orchestrator |
2025-05-29 00:51:18.783431 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB volume availability] ***************
2025-05-29 00:51:18.783442 | orchestrator | Thursday 29 May 2025 00:49:55 +0000 (0:00:01.257) 0:01:01.883 **********
2025-05-29 00:51:18.783452 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:51:18.783464 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:51:18.783474 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:51:18.783485 | orchestrator |
2025-05-29 00:51:18.783496 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB volume availability] ***************
2025-05-29 00:51:18.783507 | orchestrator | Thursday 29 May 2025 00:49:56 +0000 (0:00:00.540) 0:01:02.423 **********
2025-05-29 00:51:18.783517 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:51:18.783528 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:51:18.783562 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:51:18.783573 | orchestrator |
2025-05-29 00:51:18.783584 | orchestrator | TASK [ovn-db : Establish whether the OVN NB cluster has already existed] *******
2025-05-29 00:51:18.783595 | orchestrator | Thursday 29 May 2025 00:49:56 +0000 (0:00:00.480) 0:01:02.904 **********
2025-05-29 00:51:18.783606 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:51:18.783617 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:51:18.783628 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:51:18.783639 | orchestrator |
2025-05-29 00:51:18.783650 | orchestrator | TASK [ovn-db : Establish whether the OVN SB cluster has already existed] *******
2025-05-29 00:51:18.783660 | orchestrator | Thursday 29 May 2025 00:49:57 +0000 (0:00:00.558) 0:01:03.462 **********
2025-05-29 00:51:18.783671 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:51:18.783682 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:51:18.783693 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:51:18.783704 | orchestrator |
2025-05-29 00:51:18.783715 | orchestrator | TASK [ovn-db : Check if running on all OVN NB DB hosts] ************************
2025-05-29 00:51:18.783726 | orchestrator | Thursday 29 May 2025 00:49:57 +0000 (0:00:00.426) 0:01:03.889 **********
2025-05-29 00:51:18.783737 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.783748 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.783758 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.783769 | orchestrator |
2025-05-29 00:51:18.783780 | orchestrator | TASK [ovn-db : Check OVN NB service port liveness] *****************************
2025-05-29 00:51:18.783791 | orchestrator | Thursday 29 May 2025 00:49:58 +0000 (0:00:00.678) 0:01:04.567 **********
2025-05-29 00:51:18.783802 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.783812 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.783836 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.783847 | orchestrator |
2025-05-29 00:51:18.783858 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB service port liveness] *************
2025-05-29 00:51:18.783869 | orchestrator | Thursday 29 May 2025 00:49:58 +0000 (0:00:00.425) 0:01:04.993 **********
2025-05-29 00:51:18.783880 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.783891 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.783902 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.783912 | orchestrator |
2025-05-29 00:51:18.783923 | orchestrator | TASK [ovn-db : Get OVN NB database information] ********************************
2025-05-29 00:51:18.783934 | orchestrator | Thursday 29 May 2025 00:49:59 +0000 (0:00:00.348) 0:01:05.342 **********
2025-05-29 00:51:18.783945 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.783956 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.783967 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.783977 | orchestrator |
2025-05-29 00:51:18.783988 | orchestrator | TASK [ovn-db : Divide hosts by their OVN NB leader/follower role] **************
2025-05-29 00:51:18.783999 | orchestrator | Thursday 29 May 2025 00:49:59 +0000 (0:00:00.219) 0:01:05.561 **********
2025-05-29 00:51:18.784015 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.784026 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.784037 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.784048 | orchestrator |
2025-05-29 00:51:18.784058 | orchestrator | TASK [ovn-db : Fail on existing OVN NB cluster with no leader] *****************
2025-05-29 00:51:18.784069 | orchestrator | Thursday 29 May 2025 00:49:59 +0000 (0:00:00.315) 0:01:05.877 **********
2025-05-29 00:51:18.784080 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.784091 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.784102 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.784113 | orchestrator |
2025-05-29 00:51:18.784124 | orchestrator | TASK [ovn-db : Check if running on all OVN SB DB hosts] ************************
2025-05-29 00:51:18.784135 | orchestrator | Thursday 29 May 2025 00:50:00 +0000 (0:00:00.374) 0:01:06.251 **********
2025-05-29 00:51:18.784146 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.784157 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.784168 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.784178 | orchestrator |
2025-05-29 00:51:18.784189 | orchestrator | TASK [ovn-db : Check OVN SB service port liveness] *****************************
2025-05-29 00:51:18.784200 | orchestrator | Thursday 29 May 2025 00:50:00 +0000 (0:00:00.420) 0:01:06.671 **********
2025-05-29 00:51:18.784211 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.784222 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.784233 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.784243 | orchestrator |
2025-05-29 00:51:18.784254 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB service port liveness] *************
2025-05-29 00:51:18.784265 | orchestrator | Thursday 29 May 2025 00:50:00 +0000 (0:00:00.204) 0:01:06.876 **********
2025-05-29 00:51:18.784276 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.784286 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.784297 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.784308 | orchestrator |
2025-05-29 00:51:18.784319 | orchestrator | TASK [ovn-db : Get OVN SB database information] ********************************
2025-05-29 00:51:18.784330 | orchestrator | Thursday 29 May 2025 00:50:01 +0000 (0:00:00.325) 0:01:07.202 **********
2025-05-29 00:51:18.784341 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.784352 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.784362 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.784373 | orchestrator |
2025-05-29 00:51:18.784389 | orchestrator | TASK [ovn-db : Divide hosts by their OVN SB leader/follower role] **************
2025-05-29 00:51:18.784401 | orchestrator | Thursday 29 May 2025 00:50:01 +0000 (0:00:00.307) 0:01:07.509 **********
2025-05-29 00:51:18.784411 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.784422 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.784433 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.784450 | orchestrator |
2025-05-29 00:51:18.784461 | orchestrator | TASK [ovn-db : Fail on existing OVN SB cluster with no leader] *****************
2025-05-29 00:51:18.784471 | orchestrator | Thursday 29 May 2025 00:50:01 +0000 (0:00:00.322) 0:01:07.732 **********
2025-05-29 00:51:18.784482 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.784493 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.784504 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.784514 | orchestrator |
2025-05-29 00:51:18.784525 | orchestrator | TASK [ovn-db : include_tasks] **************************************************
2025-05-29 00:51:18.784551 | orchestrator | Thursday 29 May 2025 00:50:02 +0000 (0:00:00.554) 0:01:08.055 **********
2025-05-29 00:51:18.784562 | orchestrator | included: /ansible/roles/ovn-db/tasks/bootstrap-initial.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:51:18.784573 | orchestrator |
2025-05-29 00:51:18.784584 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new cluster)] *******************
2025-05-29 00:51:18.784594 | orchestrator | Thursday 29 May 2025 00:50:02 +0000 (0:00:00.554) 0:01:08.610 **********
2025-05-29 00:51:18.784605 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:51:18.784616 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:51:18.784627 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:51:18.784638 | orchestrator |
2025-05-29 00:51:18.784648 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new cluster)] *******************
2025-05-29 00:51:18.784659 | orchestrator | Thursday 29 May 2025 00:50:02 +0000 (0:00:00.360) 0:01:08.971 **********
2025-05-29 00:51:18.784670 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:51:18.784687 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:51:18.784706 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:51:18.784724 | orchestrator |
2025-05-29 00:51:18.784742 | orchestrator | TASK [ovn-db : Check NB cluster status] ****************************************
2025-05-29 00:51:18.784761 | orchestrator | Thursday 29 May 2025 00:50:03 +0000 (0:00:00.522) 0:01:09.493 **********
2025-05-29 00:51:18.784782 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.784802 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.784822 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.784834 | orchestrator |
2025-05-29 00:51:18.784845 | orchestrator | TASK [ovn-db : Check SB cluster status] ****************************************
2025-05-29 00:51:18.784855 | orchestrator | Thursday 29 May 2025 00:50:03 +0000 (0:00:00.419) 0:01:09.912 **********
2025-05-29 00:51:18.784866 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.784877 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.784887 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.784898 | orchestrator |
2025-05-29 00:51:18.784909 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in NB DB] ***
2025-05-29 00:51:18.784919 | orchestrator | Thursday 29 May 2025 00:50:04 +0000 (0:00:00.351) 0:01:10.264 **********
2025-05-29 00:51:18.784930 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.784941 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.784952 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.784962 | orchestrator |
2025-05-29 00:51:18.784973 | orchestrator | TASK [ovn-db : Remove an old node with the same ip address as the new node in SB DB] ***
2025-05-29 00:51:18.784984 | orchestrator | Thursday 29 May 2025 00:50:04 +0000 (0:00:00.337) 0:01:10.604 **********
2025-05-29 00:51:18.784995 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.785005 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.785016 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.785027 | orchestrator |
2025-05-29 00:51:18.785037 | orchestrator | TASK [ovn-db : Set bootstrap args fact for NB (new member)] ********************
2025-05-29 00:51:18.785054 | orchestrator | Thursday 29 May 2025 00:50:05 +0000 (0:00:00.453) 0:01:11.058 **********
2025-05-29 00:51:18.785065 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.785076 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.785086 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.785097 | orchestrator |
2025-05-29 00:51:18.785107 | orchestrator | TASK [ovn-db : Set bootstrap args fact for SB (new member)] ********************
2025-05-29 00:51:18.785125 | orchestrator | Thursday 29 May 2025 00:50:05 +0000 (0:00:00.394) 0:01:11.452 **********
2025-05-29 00:51:18.785136 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:51:18.785147 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:51:18.785157 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:51:18.785168 | orchestrator |
2025-05-29 00:51:18.785179 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ******************************
2025-05-29 00:51:18.785190 | orchestrator | Thursday 29 May 2025 00:50:05 +0000 (0:00:00.319) 0:01:11.771 **********
2025-05-29 00:51:18.785201 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.785214 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.785232 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.785245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 00:51:18.785258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group':
'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785270 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785281 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785292 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785310 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785322 | orchestrator | 2025-05-29 00:51:18.785333 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-29 00:51:18.785344 | orchestrator | Thursday 29 May 2025 00:50:07 +0000 (0:00:01.499) 0:01:13.271 ********** 2025-05-29 00:51:18.785355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785366 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785461 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785483 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785495 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785506 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785517 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}}) 2025-05-29 00:51:18.785580 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785601 | orchestrator | 2025-05-29 00:51:18.785628 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-29 00:51:18.785648 | orchestrator | Thursday 29 May 2025 00:50:11 +0000 (0:00:04.714) 0:01:17.985 ********** 2025-05-29 00:51:18.785667 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785690 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}}) 2025-05-29 00:51:18.785718 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785731 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785742 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785764 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785782 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.785793 | orchestrator | 2025-05-29 00:51:18.785804 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-29 00:51:18.785815 | orchestrator | Thursday 29 May 2025 00:50:14 +0000 (0:00:02.573) 0:01:20.559 ********** 2025-05-29 00:51:18.785826 | orchestrator | 2025-05-29 00:51:18.785837 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-29 00:51:18.785852 | orchestrator | Thursday 29 May 2025 00:50:14 +0000 (0:00:00.058) 0:01:20.618 ********** 2025-05-29 00:51:18.785863 | orchestrator | 2025-05-29 00:51:18.785874 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-29 00:51:18.785885 | orchestrator | Thursday 29 May 2025 00:50:14 +0000 (0:00:00.054) 0:01:20.672 ********** 2025-05-29 00:51:18.785896 | orchestrator | 2025-05-29 00:51:18.785906 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-29 00:51:18.785917 | orchestrator | Thursday 29 May 2025 00:50:14 +0000 (0:00:00.054) 0:01:20.726 ********** 2025-05-29 00:51:18.785928 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:51:18.785939 | 
orchestrator | changed: [testbed-node-2] 2025-05-29 00:51:18.785949 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:51:18.785960 | orchestrator | 2025-05-29 00:51:18.785971 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-29 00:51:18.785981 | orchestrator | Thursday 29 May 2025 00:50:22 +0000 (0:00:07.970) 0:01:28.696 ********** 2025-05-29 00:51:18.785992 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:51:18.786003 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:51:18.786013 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:51:18.786073 | orchestrator | 2025-05-29 00:51:18.786085 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-29 00:51:18.786096 | orchestrator | Thursday 29 May 2025 00:50:29 +0000 (0:00:06.685) 0:01:35.381 ********** 2025-05-29 00:51:18.786106 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:51:18.786117 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:51:18.786128 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:51:18.786139 | orchestrator | 2025-05-29 00:51:18.786150 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-29 00:51:18.786160 | orchestrator | Thursday 29 May 2025 00:50:36 +0000 (0:00:06.837) 0:01:42.219 ********** 2025-05-29 00:51:18.786178 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:51:18.786196 | orchestrator | 2025-05-29 00:51:18.786207 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-29 00:51:18.786218 | orchestrator | Thursday 29 May 2025 00:50:36 +0000 (0:00:00.122) 0:01:42.341 ********** 2025-05-29 00:51:18.786229 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:51:18.786240 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:51:18.786251 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:51:18.786262 | orchestrator | 2025-05-29 
00:51:18.786280 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-29 00:51:18.786291 | orchestrator | Thursday 29 May 2025 00:50:37 +0000 (0:00:00.874) 0:01:43.216 ********** 2025-05-29 00:51:18.786302 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:51:18.786313 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:51:18.786323 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:51:18.786334 | orchestrator | 2025-05-29 00:51:18.786345 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-29 00:51:18.786363 | orchestrator | Thursday 29 May 2025 00:50:37 +0000 (0:00:00.627) 0:01:43.843 ********** 2025-05-29 00:51:18.786374 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:51:18.786384 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:51:18.786395 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:51:18.786406 | orchestrator | 2025-05-29 00:51:18.786416 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-29 00:51:18.786427 | orchestrator | Thursday 29 May 2025 00:50:38 +0000 (0:00:00.891) 0:01:44.734 ********** 2025-05-29 00:51:18.786438 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:51:18.786448 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:51:18.786459 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:51:18.786469 | orchestrator | 2025-05-29 00:51:18.786480 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-29 00:51:18.786491 | orchestrator | Thursday 29 May 2025 00:50:39 +0000 (0:00:00.650) 0:01:45.385 ********** 2025-05-29 00:51:18.786502 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:51:18.786512 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:51:18.786523 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:51:18.786558 | orchestrator | 2025-05-29 00:51:18.786576 | orchestrator | TASK 
[ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-29 00:51:18.786587 | orchestrator | Thursday 29 May 2025 00:50:40 +0000 (0:00:01.002) 0:01:46.388 ********** 2025-05-29 00:51:18.786598 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:51:18.786609 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:51:18.786620 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:51:18.786631 | orchestrator | 2025-05-29 00:51:18.786642 | orchestrator | TASK [ovn-db : Unset bootstrap args fact] ************************************** 2025-05-29 00:51:18.786653 | orchestrator | Thursday 29 May 2025 00:50:41 +0000 (0:00:00.750) 0:01:47.139 ********** 2025-05-29 00:51:18.786664 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:51:18.786675 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:51:18.786685 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:51:18.786696 | orchestrator | 2025-05-29 00:51:18.786707 | orchestrator | TASK [ovn-db : Ensuring config directories exist] ****************************** 2025-05-29 00:51:18.786720 | orchestrator | Thursday 29 May 2025 00:50:41 +0000 (0:00:00.459) 0:01:47.598 ********** 2025-05-29 00:51:18.786740 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.786759 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.786785 | 
orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.786808 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.786828 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.786856 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.786886 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': 
['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.786905 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.786922 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.786941 | orchestrator | 2025-05-29 00:51:18.786962 | orchestrator | TASK [ovn-db : Copying over config.json files for services] ******************** 2025-05-29 00:51:18.786981 | orchestrator | Thursday 29 May 2025 00:50:43 +0000 (0:00:01.662) 0:01:49.261 ********** 2025-05-29 00:51:18.786996 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787008 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787019 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787036 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787068 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 
2025-05-29 00:51:18.787096 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787109 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787131 | orchestrator | 2025-05-29 00:51:18.787142 | orchestrator | TASK [ovn-db : Check ovn containers] ******************************************* 2025-05-29 00:51:18.787153 | orchestrator | Thursday 29 May 2025 00:50:47 +0000 (0:00:04.404) 0:01:53.665 ********** 2025-05-29 00:51:18.787164 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787175 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787187 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-northd', 'value': {'container_name': 'ovn_northd', 'group': 'ovn-northd', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-northd:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-northd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787202 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787220 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787232 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-nb-db', 'value': {'container_name': 'ovn_nb_db', 'group': 'ovn-nb-db', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/ovn-nb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-nb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_nb_db:/var/lib/openvswitch/ovn-nb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787243 | orchestrator | ok: [testbed-node-1] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787260 | orchestrator | ok: [testbed-node-2] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787271 | orchestrator | ok: [testbed-node-0] => (item={'key': 'ovn-sb-db', 'value': {'container_name': 'ovn_sb_db', 'group': 'ovn-sb-db', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/ovn-sb-db-server:24.3.4.20241206', 'volumes': ['/etc/kolla/ovn-sb-db/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'ovn_sb_db:/var/lib/openvswitch/ovn-sb/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 00:51:18.787282 | orchestrator | 2025-05-29 00:51:18.787294 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-29 00:51:18.787305 | orchestrator | Thursday 29 May 2025 00:50:50 +0000 (0:00:03.312) 0:01:56.977 ********** 2025-05-29 00:51:18.787316 | orchestrator | 2025-05-29 00:51:18.787327 | orchestrator | 
TASK [ovn-db : Flush handlers] ************************************************* 2025-05-29 00:51:18.787338 | orchestrator | Thursday 29 May 2025 00:50:51 +0000 (0:00:00.073) 0:01:57.051 ********** 2025-05-29 00:51:18.787349 | orchestrator | 2025-05-29 00:51:18.787359 | orchestrator | TASK [ovn-db : Flush handlers] ************************************************* 2025-05-29 00:51:18.787370 | orchestrator | Thursday 29 May 2025 00:50:51 +0000 (0:00:00.208) 0:01:57.259 ********** 2025-05-29 00:51:18.787381 | orchestrator | 2025-05-29 00:51:18.787392 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-nb-db container] ************************* 2025-05-29 00:51:18.787402 | orchestrator | Thursday 29 May 2025 00:50:51 +0000 (0:00:00.061) 0:01:57.321 ********** 2025-05-29 00:51:18.787413 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:51:18.787424 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:51:18.787435 | orchestrator | 2025-05-29 00:51:18.787446 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-sb-db container] ************************* 2025-05-29 00:51:18.787456 | orchestrator | Thursday 29 May 2025 00:50:57 +0000 (0:00:06.311) 0:02:03.633 ********** 2025-05-29 00:51:18.787467 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:51:18.787478 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:51:18.787489 | orchestrator | 2025-05-29 00:51:18.787499 | orchestrator | RUNNING HANDLER [ovn-db : Restart ovn-northd container] ************************ 2025-05-29 00:51:18.787510 | orchestrator | Thursday 29 May 2025 00:51:04 +0000 (0:00:06.714) 0:02:10.348 ********** 2025-05-29 00:51:18.787521 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:51:18.787576 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:51:18.787597 | orchestrator | 2025-05-29 00:51:18.787608 | orchestrator | TASK [ovn-db : Wait for leader election] *************************************** 2025-05-29 00:51:18.787618 | orchestrator | Thursday 29 May 2025 
00:51:10 +0000 (0:00:06.260) 0:02:16.608 ********** 2025-05-29 00:51:18.787629 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:51:18.787640 | orchestrator | 2025-05-29 00:51:18.787651 | orchestrator | TASK [ovn-db : Get OVN_Northbound cluster leader] ****************************** 2025-05-29 00:51:18.787662 | orchestrator | Thursday 29 May 2025 00:51:10 +0000 (0:00:00.207) 0:02:16.815 ********** 2025-05-29 00:51:18.787673 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:51:18.787684 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:51:18.787695 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:51:18.787705 | orchestrator | 2025-05-29 00:51:18.787716 | orchestrator | TASK [ovn-db : Configure OVN NB connection settings] *************************** 2025-05-29 00:51:18.787727 | orchestrator | Thursday 29 May 2025 00:51:11 +0000 (0:00:00.751) 0:02:17.567 ********** 2025-05-29 00:51:18.787738 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:51:18.787749 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:51:18.787764 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:51:18.787775 | orchestrator | 2025-05-29 00:51:18.787786 | orchestrator | TASK [ovn-db : Get OVN_Southbound cluster leader] ****************************** 2025-05-29 00:51:18.787797 | orchestrator | Thursday 29 May 2025 00:51:12 +0000 (0:00:00.657) 0:02:18.225 ********** 2025-05-29 00:51:18.787808 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:51:18.787819 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:51:18.787838 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:51:18.787857 | orchestrator | 2025-05-29 00:51:18.787887 | orchestrator | TASK [ovn-db : Configure OVN SB connection settings] *************************** 2025-05-29 00:51:18.787906 | orchestrator | Thursday 29 May 2025 00:51:13 +0000 (0:00:00.841) 0:02:19.067 ********** 2025-05-29 00:51:18.787924 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:51:18.787941 | orchestrator | skipping: 
[testbed-node-2] 2025-05-29 00:51:18.787959 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:51:18.787974 | orchestrator | 2025-05-29 00:51:18.787991 | orchestrator | TASK [ovn-db : Wait for ovn-nb-db] ********************************************* 2025-05-29 00:51:18.788009 | orchestrator | Thursday 29 May 2025 00:51:13 +0000 (0:00:00.768) 0:02:19.835 ********** 2025-05-29 00:51:18.788026 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:51:18.788043 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:51:18.788060 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:51:18.788076 | orchestrator | 2025-05-29 00:51:18.788094 | orchestrator | TASK [ovn-db : Wait for ovn-sb-db] ********************************************* 2025-05-29 00:51:18.788111 | orchestrator | Thursday 29 May 2025 00:51:14 +0000 (0:00:00.666) 0:02:20.501 ********** 2025-05-29 00:51:18.788128 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:51:18.788143 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:51:18.788160 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:51:18.788178 | orchestrator | 2025-05-29 00:51:18.788195 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:51:18.788213 | orchestrator | testbed-node-0 : ok=44  changed=18  unreachable=0 failed=0 skipped=20  rescued=0 ignored=0 2025-05-29 00:51:18.788231 | orchestrator | testbed-node-1 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-29 00:51:18.788259 | orchestrator | testbed-node-2 : ok=43  changed=18  unreachable=0 failed=0 skipped=22  rescued=0 ignored=0 2025-05-29 00:51:18.788277 | orchestrator | testbed-node-3 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:51:18.788295 | orchestrator | testbed-node-4 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 00:51:18.788313 | orchestrator | testbed-node-5 : ok=12  changed=8  unreachable=0 failed=0 skipped=0 
rescued=0 ignored=0 2025-05-29 00:51:18.788339 | orchestrator | 2025-05-29 00:51:18.788350 | orchestrator | 2025-05-29 00:51:18.788359 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 00:51:18.788374 | orchestrator | Thursday 29 May 2025 00:51:15 +0000 (0:00:01.165) 0:02:21.666 ********** 2025-05-29 00:51:18.788391 | orchestrator | =============================================================================== 2025-05-29 00:51:18.788401 | orchestrator | ovn-controller : Restart ovn-controller container ---------------------- 21.47s 2025-05-29 00:51:18.788411 | orchestrator | ovn-controller : Configure OVN in OVSDB -------------------------------- 18.46s 2025-05-29 00:51:18.788421 | orchestrator | ovn-db : Restart ovn-nb-db container ----------------------------------- 14.28s 2025-05-29 00:51:18.788430 | orchestrator | ovn-db : Restart ovn-sb-db container ----------------------------------- 13.40s 2025-05-29 00:51:18.788442 | orchestrator | ovn-db : Restart ovn-northd container ---------------------------------- 13.10s 2025-05-29 00:51:18.788459 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.71s 2025-05-29 00:51:18.788475 | orchestrator | ovn-db : Copying over config.json files for services -------------------- 4.40s 2025-05-29 00:51:18.788492 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 3.31s 2025-05-29 00:51:18.788509 | orchestrator | ovn-controller : Create br-int bridge on OpenvSwitch -------------------- 3.18s 2025-05-29 00:51:18.788525 | orchestrator | ovn-controller : Copying over systemd override -------------------------- 2.72s 2025-05-29 00:51:18.788563 | orchestrator | ovn-db : Check ovn containers ------------------------------------------- 2.57s 2025-05-29 00:51:18.788577 | orchestrator | ovn-controller : Reload systemd config ---------------------------------- 2.17s 2025-05-29 00:51:18.788590 | 
orchestrator | ovn-controller : Copying over config.json files for services ------------ 1.97s 2025-05-29 00:51:18.788604 | orchestrator | ovn-controller : Ensuring systemd override directory exists ------------- 1.88s 2025-05-29 00:51:18.788619 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.66s 2025-05-29 00:51:18.788634 | orchestrator | Group hosts based on enabled services ----------------------------------- 1.55s 2025-05-29 00:51:18.788650 | orchestrator | ovn-db : Ensuring config directories exist ------------------------------ 1.50s 2025-05-29 00:51:18.788664 | orchestrator | ovn-controller : Check ovn-controller containers ------------------------ 1.36s 2025-05-29 00:51:18.788678 | orchestrator | ovn-controller : include_tasks ------------------------------------------ 1.34s 2025-05-29 00:51:18.788692 | orchestrator | ovn-controller : Ensuring config directories exist ---------------------- 1.29s 2025-05-29 00:51:18.788707 | orchestrator | 2025-05-29 00:51:18 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:51:21.826423 | orchestrator | 2025-05-29 00:51:21 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:51:21.832367 | orchestrator | 2025-05-29 00:51:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:51:21.836062 | orchestrator | 2025-05-29 00:51:21 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:51:21.836104 | orchestrator | 2025-05-29 00:51:21 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:54:06.641621 | orchestrator | 2025-05-29 00:54:06 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:54:06.648114 | orchestrator | 2025-05-29 00:54:06 | INFO  | Task 4f5dd080-01fc-4b42-a268-549b7c23d558 is in state STARTED
2025-05-29 00:54:06.648273 | orchestrator | 2025-05-29 00:54:06 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:54:06.651720 | orchestrator | 2025-05-29 00:54:06 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED
2025-05-29 00:54:06.651777 | orchestrator | 2025-05-29 00:54:06 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:09.720841 | orchestrator | 2025-05-29 00:54:09 | INFO  | Task 
90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:09.724021 | orchestrator | 2025-05-29 00:54:09 | INFO  | Task 4f5dd080-01fc-4b42-a268-549b7c23d558 is in state STARTED 2025-05-29 00:54:09.726133 | orchestrator | 2025-05-29 00:54:09 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:09.727554 | orchestrator | 2025-05-29 00:54:09 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:09.727709 | orchestrator | 2025-05-29 00:54:09 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:12.779639 | orchestrator | 2025-05-29 00:54:12 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:12.782102 | orchestrator | 2025-05-29 00:54:12 | INFO  | Task 4f5dd080-01fc-4b42-a268-549b7c23d558 is in state STARTED 2025-05-29 00:54:12.785730 | orchestrator | 2025-05-29 00:54:12 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:12.788702 | orchestrator | 2025-05-29 00:54:12 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:12.788793 | orchestrator | 2025-05-29 00:54:12 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:15.843896 | orchestrator | 2025-05-29 00:54:15 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:15.844736 | orchestrator | 2025-05-29 00:54:15 | INFO  | Task 4f5dd080-01fc-4b42-a268-549b7c23d558 is in state STARTED 2025-05-29 00:54:15.846576 | orchestrator | 2025-05-29 00:54:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:15.847997 | orchestrator | 2025-05-29 00:54:15 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:15.848166 | orchestrator | 2025-05-29 00:54:15 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:18.898194 | orchestrator | 2025-05-29 00:54:18 | INFO  | Task 
90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:18.898987 | orchestrator | 2025-05-29 00:54:18 | INFO  | Task 4f5dd080-01fc-4b42-a268-549b7c23d558 is in state SUCCESS 2025-05-29 00:54:18.901046 | orchestrator | 2025-05-29 00:54:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:18.903566 | orchestrator | 2025-05-29 00:54:18 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:18.903612 | orchestrator | 2025-05-29 00:54:18 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:21.960970 | orchestrator | 2025-05-29 00:54:21 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:21.964757 | orchestrator | 2025-05-29 00:54:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:21.964933 | orchestrator | 2025-05-29 00:54:21 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:21.964950 | orchestrator | 2025-05-29 00:54:21 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:25.021966 | orchestrator | 2025-05-29 00:54:25 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:25.022291 | orchestrator | 2025-05-29 00:54:25 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:25.023565 | orchestrator | 2025-05-29 00:54:25 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:25.023602 | orchestrator | 2025-05-29 00:54:25 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:28.072582 | orchestrator | 2025-05-29 00:54:28 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:28.073992 | orchestrator | 2025-05-29 00:54:28 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:28.075592 | orchestrator | 2025-05-29 00:54:28 | INFO  | Task 
27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:28.075625 | orchestrator | 2025-05-29 00:54:28 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:31.133793 | orchestrator | 2025-05-29 00:54:31 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:31.133981 | orchestrator | 2025-05-29 00:54:31 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:31.135810 | orchestrator | 2025-05-29 00:54:31 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:31.135861 | orchestrator | 2025-05-29 00:54:31 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:34.180133 | orchestrator | 2025-05-29 00:54:34 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:34.180490 | orchestrator | 2025-05-29 00:54:34 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:34.182630 | orchestrator | 2025-05-29 00:54:34 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:34.182755 | orchestrator | 2025-05-29 00:54:34 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:37.246425 | orchestrator | 2025-05-29 00:54:37 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:37.248206 | orchestrator | 2025-05-29 00:54:37 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:37.249620 | orchestrator | 2025-05-29 00:54:37 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:37.249892 | orchestrator | 2025-05-29 00:54:37 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:40.296989 | orchestrator | 2025-05-29 00:54:40 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:40.297185 | orchestrator | 2025-05-29 00:54:40 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state 
STARTED 2025-05-29 00:54:40.297830 | orchestrator | 2025-05-29 00:54:40 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:40.297860 | orchestrator | 2025-05-29 00:54:40 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:43.341206 | orchestrator | 2025-05-29 00:54:43 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:43.342853 | orchestrator | 2025-05-29 00:54:43 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:43.345191 | orchestrator | 2025-05-29 00:54:43 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:43.345301 | orchestrator | 2025-05-29 00:54:43 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:46.389035 | orchestrator | 2025-05-29 00:54:46 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:46.389866 | orchestrator | 2025-05-29 00:54:46 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:46.390734 | orchestrator | 2025-05-29 00:54:46 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:46.390784 | orchestrator | 2025-05-29 00:54:46 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:49.447043 | orchestrator | 2025-05-29 00:54:49 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:49.448926 | orchestrator | 2025-05-29 00:54:49 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:49.454228 | orchestrator | 2025-05-29 00:54:49 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:49.454273 | orchestrator | 2025-05-29 00:54:49 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:52.497789 | orchestrator | 2025-05-29 00:54:52 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:52.498129 | orchestrator | 
2025-05-29 00:54:52 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:52.499231 | orchestrator | 2025-05-29 00:54:52 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:52.499272 | orchestrator | 2025-05-29 00:54:52 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:55.544793 | orchestrator | 2025-05-29 00:54:55 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:55.548093 | orchestrator | 2025-05-29 00:54:55 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:55.552402 | orchestrator | 2025-05-29 00:54:55 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:55.552433 | orchestrator | 2025-05-29 00:54:55 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:54:58.593258 | orchestrator | 2025-05-29 00:54:58 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:54:58.596104 | orchestrator | 2025-05-29 00:54:58 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:54:58.596634 | orchestrator | 2025-05-29 00:54:58 | INFO  | Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state STARTED 2025-05-29 00:54:58.596670 | orchestrator | 2025-05-29 00:54:58 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:55:01.653337 | orchestrator | 2025-05-29 00:55:01 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:55:01.653539 | orchestrator | 2025-05-29 00:55:01 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:55:01.655631 | orchestrator | 2025-05-29 00:55:01 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:55:01.658009 | orchestrator | 2025-05-29 00:55:01 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:55:01.677655 | orchestrator | 2025-05-29 00:55:01 | INFO  | 
Task 27dedc27-f6ad-47d3-9548-105ccdb7a61f is in state SUCCESS
2025-05-29 00:55:01.679443 | orchestrator |
2025-05-29 00:55:01.679485 | orchestrator | None
2025-05-29 00:55:01.679498 | orchestrator |
2025-05-29 00:55:01.679510 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-29 00:55:01.679523 | orchestrator |
2025-05-29 00:55:01.679534 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-29 00:55:01.679546 | orchestrator | Thursday 29 May 2025 00:47:37 +0000 (0:00:00.392) 0:00:00.392 **********
2025-05-29 00:55:01.679557 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:55:01.679570 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:55:01.679582 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:55:01.679593 | orchestrator |
2025-05-29 00:55:01.679605 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-29 00:55:01.679617 | orchestrator | Thursday 29 May 2025 00:47:38 +0000 (0:00:00.866) 0:00:01.258 **********
2025-05-29 00:55:01.679629 | orchestrator | ok: [testbed-node-0] => (item=enable_loadbalancer_True)
2025-05-29 00:55:01.679640 | orchestrator | ok: [testbed-node-1] => (item=enable_loadbalancer_True)
2025-05-29 00:55:01.679652 | orchestrator | ok: [testbed-node-2] => (item=enable_loadbalancer_True)
2025-05-29 00:55:01.679663 | orchestrator |
2025-05-29 00:55:01.679675 | orchestrator | PLAY [Apply role loadbalancer] *************************************************
2025-05-29 00:55:01.679686 | orchestrator |
2025-05-29 00:55:01.679697 | orchestrator | TASK [loadbalancer : include_tasks] ********************************************
2025-05-29 00:55:01.679709 | orchestrator | Thursday 29 May 2025 00:47:39 +0000 (0:00:00.385) 0:00:01.644 **********
2025-05-29 00:55:01.679721 | orchestrator | included: /ansible/roles/loadbalancer/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:55:01.679733 | orchestrator |
2025-05-29 00:55:01.679745 | orchestrator | TASK [loadbalancer : Check IPv6 support] ***************************************
2025-05-29 00:55:01.679756 | orchestrator | Thursday 29 May 2025 00:47:40 +0000 (0:00:01.487) 0:00:03.131 **********
2025-05-29 00:55:01.679768 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:55:01.679779 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:55:01.679791 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:55:01.679818 | orchestrator |
2025-05-29 00:55:01.679829 | orchestrator | TASK [Setting sysctl values] ***************************************************
2025-05-29 00:55:01.679840 | orchestrator | Thursday 29 May 2025 00:47:41 +0000 (0:00:01.103) 0:00:04.235 **********
2025-05-29 00:55:01.679851 | orchestrator | included: sysctl for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:55:01.679862 | orchestrator |
2025-05-29 00:55:01.679873 | orchestrator | TASK [sysctl : Check IPv6 support] *********************************************
2025-05-29 00:55:01.679884 | orchestrator | Thursday 29 May 2025 00:47:42 +0000 (0:00:01.006) 0:00:05.241 **********
2025-05-29 00:55:01.679895 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:55:01.679906 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:55:01.679917 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:55:01.679929 | orchestrator |
2025-05-29 00:55:01.679940 | orchestrator | TASK [sysctl : Setting sysctl values] ******************************************
2025-05-29 00:55:01.679959 | orchestrator | Thursday 29 May 2025 00:47:44 +0000 (0:00:01.486) 0:00:06.728 **********
2025-05-29 00:55:01.679970 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-29 00:55:01.679982 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-29 00:55:01.680001 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-29 00:55:01.680021 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-29 00:55:01.680067 | orchestrator | ok: [testbed-node-1] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-29 00:55:01.680087 | orchestrator | ok: [testbed-node-2] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-29 00:55:01.680106 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv6.ip_nonlocal_bind', 'value': 1})
2025-05-29 00:55:01.680125 | orchestrator | changed: [testbed-node-2] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-29 00:55:01.680145 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.ipv4.ip_nonlocal_bind', 'value': 1})
2025-05-29 00:55:01.680176 | orchestrator | ok: [testbed-node-0] => (item={'name': 'net.ipv4.tcp_retries2', 'value': 'KOLLA_UNSET'})
2025-05-29 00:55:01.680208 | orchestrator | changed: [testbed-node-1] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-29 00:55:01.680235 | orchestrator | changed: [testbed-node-0] => (item={'name': 'net.unix.max_dgram_qlen', 'value': 128})
2025-05-29 00:55:01.680248 | orchestrator |
2025-05-29 00:55:01.680261 | orchestrator | TASK [module-load : Load modules] **********************************************
2025-05-29 00:55:01.680274 | orchestrator | Thursday 29 May 2025 00:47:48 +0000 (0:00:04.420) 0:00:11.148 **********
2025-05-29 00:55:01.680287 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-05-29 00:55:01.680301 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-05-29 00:55:01.680313 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-05-29 00:55:01.680326 | orchestrator |
2025-05-29 00:55:01.680339 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************
2025-05-29 00:55:01.680389 | orchestrator | Thursday 29 May 2025 00:47:49 +0000 (0:00:01.275) 0:00:12.424 **********
2025-05-29 00:55:01.680403 | orchestrator | changed: [testbed-node-1] => (item=ip_vs)
2025-05-29 00:55:01.680416 | orchestrator | changed: [testbed-node-2] => (item=ip_vs)
2025-05-29 00:55:01.680427 | orchestrator | changed: [testbed-node-0] => (item=ip_vs)
2025-05-29 00:55:01.680438 | orchestrator |
2025-05-29 00:55:01.680449 | orchestrator | TASK [module-load : Drop module persistence] ***********************************
2025-05-29 00:55:01.680461 | orchestrator | Thursday 29 May 2025 00:47:51 +0000 (0:00:01.743) 0:00:14.167 **********
2025-05-29 00:55:01.680472 | orchestrator | skipping: [testbed-node-0] => (item=ip_vs)
2025-05-29 00:55:01.680483 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.680511 | orchestrator | skipping: [testbed-node-1] => (item=ip_vs)
2025-05-29 00:55:01.680523 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.680534 | orchestrator | skipping: [testbed-node-2] => (item=ip_vs)
2025-05-29 00:55:01.680544 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.680555 | orchestrator |
2025-05-29 00:55:01.680566 | orchestrator | TASK [loadbalancer : Ensuring config directories exist] ************************
2025-05-29 00:55:01.680577 | orchestrator | Thursday 29 May 2025 00:47:52 +0000 (0:00:00.782) 0:00:14.950 **********
2025-05-29 00:55:01.680592 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-29 00:55:01.680610 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-29 00:55:01.680633 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-29 00:55:01.680652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-29 00:55:01.680666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-29 00:55:01.680686 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-29 00:55:01.680699 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-29 00:55:01.680710 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-29 00:55:01.680729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-29 00:55:01.680741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-29 00:55:01.680757 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-29 00:55:01.680769 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})
2025-05-29 00:55:01.680787 | orchestrator |
2025-05-29 00:55:01.680798 | orchestrator | TASK [loadbalancer : Ensuring haproxy service config subdir exists] ************
2025-05-29 00:55:01.680809 | orchestrator | Thursday 29 May 2025 00:47:55 +0000 (0:00:03.410) 0:00:18.361 **********
2025-05-29 00:55:01.680820 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.680832 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.680842 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.680853 | orchestrator |
2025-05-29 00:55:01.680870 | orchestrator | TASK [loadbalancer : Ensuring proxysql service config subdirectories exist] ****
2025-05-29 00:55:01.680882 | orchestrator | Thursday 29 May 2025 00:47:58 +0000 (0:00:02.398) 0:00:20.759 **********
2025-05-29 00:55:01.680892 | orchestrator | changed: [testbed-node-1] => (item=users)
2025-05-29 00:55:01.680903 | orchestrator | changed: [testbed-node-0] => (item=users)
2025-05-29 00:55:01.680914 | orchestrator | changed: [testbed-node-2] => (item=users)
2025-05-29 00:55:01.680925 | orchestrator | changed: [testbed-node-0] => (item=rules)
2025-05-29 00:55:01.680936 | orchestrator | changed: [testbed-node-1] => (item=rules)
2025-05-29 00:55:01.680946 | orchestrator | changed: [testbed-node-2] => (item=rules)
2025-05-29 00:55:01.680957 | orchestrator |
2025-05-29 00:55:01.680968 | orchestrator | TASK [loadbalancer : Ensuring keepalived checks subdir exists] *****************
2025-05-29 00:55:01.680979 | orchestrator | Thursday 29 May 2025 00:48:00 +0000 (0:00:02.275) 0:00:23.035 **********
2025-05-29 00:55:01.680996 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.681007 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.681018 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.681028 | orchestrator |
2025-05-29 00:55:01.681039 | orchestrator | TASK [loadbalancer : Remove mariadb.cfg if proxysql enabled] *******************
2025-05-29 00:55:01.681050 | orchestrator | Thursday 29 May 2025 00:48:02 +0000 (0:00:02.062) 0:00:25.098 **********
2025-05-29 00:55:01.681070 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:55:01.681081 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:55:01.681092 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:55:01.681102 | orchestrator |
2025-05-29 00:55:01.681113 | orchestrator | TASK [loadbalancer : Removing checks for services which are disabled] **********
2025-05-29 00:55:01.681124 | orchestrator | Thursday 29 May 2025 00:48:04 +0000 (0:00:01.659) 0:00:26.757 **********
2025-05-29 00:55:01.681136 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})
2025-05-29 00:55:01.681148 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})
2025-05-29 00:55:01.681167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-29 00:55:01.681179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})
2025-05-29 00:55:01.681197 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})
2025-05-29 00:55:01.681217 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})
2025-05-29 00:55:01.681228 | orchestrator | skipping: [testbed-node-0]
=> (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-29 00:55:01.681240 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.681251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-29 00:55:01.681263 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-29 
00:55:01.681279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-29 00:55:01.681291 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.681303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-29 00:55:01.681327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-29 00:55:01.681339 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.681373 | orchestrator | 2025-05-29 00:55:01.681384 | orchestrator | TASK [loadbalancer : Copying checks for services which are enabled] ************ 2025-05-29 00:55:01.681395 | orchestrator | Thursday 29 May 2025 00:48:06 +0000 (0:00:02.416) 0:00:29.174 ********** 2025-05-29 00:55:01.681407 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-29 00:55:01.681419 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-29 00:55:01.681430 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-29 00:55:01.681447 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-29 00:55:01.681464 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-29 00:55:01.681483 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': 
{'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-29 00:55:01.681495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-29 00:55:01.681506 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-29 00:55:01.681518 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': 
['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-29 00:55:01.681539 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-29 00:55:01.681552 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-29 00:55:01.681576 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-29 00:55:01.681588 | orchestrator | 2025-05-29 00:55:01.681600 | orchestrator | TASK [loadbalancer : Copying over config.json files for services] ************** 2025-05-29 00:55:01.681611 | orchestrator | Thursday 29 May 2025 00:48:11 +0000 (0:00:04.575) 0:00:33.749 ********** 2025-05-29 00:55:01.681623 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-29 00:55:01.681634 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-29 00:55:01.681646 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-29 00:55:01.681662 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-29 00:55:01.681674 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-29 00:55:01.681944 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-29 00:55:01.681961 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-29 00:55:01.681972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 
'__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-29 00:55:01.681984 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-29 00:55:01.681995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-29 00:55:01.682095 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-29 00:55:01.682122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-29 00:55:01.682134 | orchestrator | 2025-05-29 00:55:01.682146 | orchestrator | TASK [loadbalancer : Copying over haproxy.cfg] ********************************* 2025-05-29 00:55:01.682157 | orchestrator | Thursday 29 May 2025 00:48:14 +0000 (0:00:03.251) 0:00:37.001 ********** 2025-05-29 00:55:01.682175 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-29 00:55:01.682187 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-29 00:55:01.682199 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_main.cfg.j2) 2025-05-29 00:55:01.682210 | orchestrator | 2025-05-29 00:55:01.682221 | orchestrator | TASK [loadbalancer : Copying over proxysql config] ***************************** 2025-05-29 00:55:01.682232 | orchestrator | Thursday 29 May 2025 00:48:17 +0000 (0:00:02.863) 0:00:39.864 ********** 2025-05-29 00:55:01.682243 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-29 00:55:01.682254 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-29 00:55:01.682265 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql.yaml.j2) 2025-05-29 00:55:01.682276 | orchestrator | 2025-05-29 00:55:01.682287 | orchestrator | TASK [loadbalancer : Copying over haproxy single external frontend config] ***** 2025-05-29 00:55:01.682298 | orchestrator | Thursday 29 May 2025 00:48:21 +0000 (0:00:04.298) 0:00:44.163 ********** 2025-05-29 00:55:01.682309 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.682320 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.682331 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.682342 | orchestrator | 2025-05-29 00:55:01.682409 | orchestrator | TASK [loadbalancer : Copying over custom haproxy services configuration] ******* 2025-05-29 00:55:01.682421 | orchestrator | Thursday 29 May 2025 00:48:22 +0000 (0:00:00.844) 0:00:45.007 ********** 2025-05-29 00:55:01.682432 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-29 00:55:01.682445 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-29 00:55:01.682456 | orchestrator | changed: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/haproxy/services.d/haproxy.cfg) 2025-05-29 00:55:01.682467 | orchestrator | 2025-05-29 00:55:01.682477 | orchestrator | TASK [loadbalancer : Copying over keepalived.conf] ***************************** 2025-05-29 00:55:01.682488 | orchestrator | Thursday 29 May 2025 00:48:25 +0000 (0:00:02.672) 0:00:47.680 ********** 2025-05-29 00:55:01.682499 | orchestrator | changed: [testbed-node-1] => 
(item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-29 00:55:01.682510 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-29 00:55:01.682521 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/keepalived/keepalived.conf.j2) 2025-05-29 00:55:01.682539 | orchestrator | 2025-05-29 00:55:01.682550 | orchestrator | TASK [loadbalancer : Copying over haproxy.pem] ********************************* 2025-05-29 00:55:01.682561 | orchestrator | Thursday 29 May 2025 00:48:27 +0000 (0:00:02.491) 0:00:50.171 ********** 2025-05-29 00:55:01.682573 | orchestrator | changed: [testbed-node-0] => (item=haproxy.pem) 2025-05-29 00:55:01.682587 | orchestrator | changed: [testbed-node-1] => (item=haproxy.pem) 2025-05-29 00:55:01.682600 | orchestrator | changed: [testbed-node-2] => (item=haproxy.pem) 2025-05-29 00:55:01.682612 | orchestrator | 2025-05-29 00:55:01.682624 | orchestrator | TASK [loadbalancer : Copying over haproxy-internal.pem] ************************ 2025-05-29 00:55:01.682637 | orchestrator | Thursday 29 May 2025 00:48:30 +0000 (0:00:02.816) 0:00:52.987 ********** 2025-05-29 00:55:01.682650 | orchestrator | changed: [testbed-node-0] => (item=haproxy-internal.pem) 2025-05-29 00:55:01.682663 | orchestrator | changed: [testbed-node-2] => (item=haproxy-internal.pem) 2025-05-29 00:55:01.682676 | orchestrator | changed: [testbed-node-1] => (item=haproxy-internal.pem) 2025-05-29 00:55:01.682689 | orchestrator | 2025-05-29 00:55:01.682707 | orchestrator | TASK [loadbalancer : include_tasks] ******************************************** 2025-05-29 00:55:01.682720 | orchestrator | Thursday 29 May 2025 00:48:32 +0000 (0:00:02.090) 0:00:55.078 ********** 2025-05-29 00:55:01.682733 | orchestrator | included: /ansible/roles/loadbalancer/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.682747 | 
orchestrator | 2025-05-29 00:55:01.682759 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over extra CA certificates] *** 2025-05-29 00:55:01.682773 | orchestrator | Thursday 29 May 2025 00:48:33 +0000 (0:00:00.838) 0:00:55.916 ********** 2025-05-29 00:55:01.682786 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-29 00:55:01.682814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-29 00:55:01.682829 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-29 00:55:01.682842 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-29 00:55:01.682861 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-29 00:55:01.682877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': 
['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-29 00:55:01.682889 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-29 00:55:01.682908 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-29 00:55:01.682921 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-29 00:55:01.682931 | orchestrator | 2025-05-29 00:55:01.682941 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS certificate] *** 2025-05-29 00:55:01.682951 | orchestrator | Thursday 29 May 2025 00:48:36 +0000 (0:00:03.431) 0:00:59.347 ********** 2025-05-29 00:55:01.682961 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-29 00:55:01.682978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-29 00:55:01.682988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-29 00:55:01.682998 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.683012 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-29 00:55:01.683023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-29 00:55:01.683039 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-29 00:55:01.683050 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.683068 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-29 00:55:01.683084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-29 00:55:01.683094 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-29 00:55:01.683104 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.683114 | orchestrator | 2025-05-29 00:55:01.683124 | orchestrator | TASK [service-cert-copy : loadbalancer | Copying over backend internal TLS key] *** 2025-05-29 00:55:01.683134 | orchestrator | Thursday 29 May 2025 00:48:37 +0000 (0:00:00.860) 0:01:00.208 ********** 2025-05-29 00:55:01.683148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}})  2025-05-29 00:55:01.683159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-29 00:55:01.683174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-29 00:55:01.683184 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.683194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}})  2025-05-29 00:55:01.683210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-29 00:55:01.683220 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-29 00:55:01.683230 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.683241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}})  2025-05-29 00:55:01.683259 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}})  2025-05-29 00:55:01.683269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}})  2025-05-29 00:55:01.683280 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.683289 | orchestrator | 2025-05-29 00:55:01.683300 | orchestrator | TASK [loadbalancer : Copying over haproxy start script] ************************ 2025-05-29 00:55:01.683314 | orchestrator | Thursday 29 May 2025 00:48:38 +0000 (0:00:01.253) 0:01:01.461 ********** 2025-05-29 00:55:01.683324 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-29 00:55:01.683334 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-29 00:55:01.683362 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/haproxy/haproxy_run.sh.j2) 2025-05-29 00:55:01.683379 | orchestrator | 2025-05-29 00:55:01.683389 | orchestrator | TASK [loadbalancer : Copying over proxysql start script] *********************** 2025-05-29 00:55:01.683399 | orchestrator | Thursday 29 May 2025 00:48:40 +0000 (0:00:01.823) 0:01:03.285 ********** 2025-05-29 00:55:01.683408 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-29 00:55:01.683418 | orchestrator | changed: [testbed-node-0] => 
(item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-29 00:55:01.683428 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/loadbalancer/templates/proxysql/proxysql_run.sh.j2) 2025-05-29 00:55:01.683437 | orchestrator | 2025-05-29 00:55:01.683447 | orchestrator | TASK [loadbalancer : Copying files for haproxy-ssh] **************************** 2025-05-29 00:55:01.683457 | orchestrator | Thursday 29 May 2025 00:48:43 +0000 (0:00:02.725) 0:01:06.010 ********** 2025-05-29 00:55:01.683466 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-29 00:55:01.683476 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-29 00:55:01.683486 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/sshd_config.j2', 'dest': 'sshd_config'})  2025-05-29 00:55:01.683495 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-29 00:55:01.683505 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.683515 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-29 00:55:01.683524 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.683534 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'haproxy-ssh/id_rsa.pub', 'dest': 'id_rsa.pub'})  2025-05-29 00:55:01.683544 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.683554 | orchestrator | 2025-05-29 00:55:01.683563 | orchestrator | TASK [loadbalancer : Check loadbalancer containers] **************************** 2025-05-29 00:55:01.683573 | orchestrator | Thursday 29 May 2025 00:48:45 +0000 (0:00:01.675) 0:01:07.685 ********** 2025-05-29 00:55:01.683583 | orchestrator | changed: [testbed-node-1] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 
'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:61313'], 'timeout': '30'}}}) 2025-05-29 00:55:01.683598 | orchestrator | changed: [testbed-node-0] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:61313'], 'timeout': '30'}}}) 2025-05-29 00:55:01.683609 | orchestrator | changed: [testbed-node-2] => (item={'key': 'haproxy', 'value': {'container_name': 'haproxy', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/haproxy:2.4.24.20241206', 'privileged': True, 'volumes': ['/etc/kolla/haproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'letsencrypt_certificates:/etc/haproxy/certificates'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:61313'], 'timeout': '30'}}}) 2025-05-29 00:55:01.683631 | orchestrator | changed: [testbed-node-1] => (item={'key': 'proxysql', 
'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-29 00:55:01.683642 | orchestrator | changed: [testbed-node-0] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-29 00:55:01.683652 | orchestrator | changed: [testbed-node-2] => (item={'key': 'proxysql', 'value': {'container_name': 'proxysql', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/proxysql:2.6.6.20241206', 'privileged': False, 'volumes': ['/etc/kolla/proxysql/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'proxysql:/var/lib/proxysql/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen proxysql 6032'], 'timeout': '30'}}}) 2025-05-29 00:55:01.683662 | orchestrator | 
changed: [testbed-node-1] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-29 00:55:01.683672 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-29 00:55:01.683683 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-29 00:55:01.683703 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keepalived', 'value': {'container_name': 'keepalived', 'group': 'loadbalancer', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/keepalived:2.2.4.20241206', 'privileged': True, 'volumes': ['/etc/kolla/keepalived/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', 'proxysql_socket:/var/lib/kolla/proxysql/'], 'dimensions': {}}}) 2025-05-29 00:55:01.685124 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-29 00:55:01.685177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'haproxy-ssh', 'value': {'container_name': 'haproxy_ssh', 'group': 'loadbalancer', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/haproxy-ssh:8.9.20241206', 'volumes': ['/etc/kolla/haproxy-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'haproxy_socket:/var/lib/kolla/haproxy/', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17', '__omit_place_holder__bb26e7d09620e6f8b5d16b58c488cb8122779f17'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 2985'], 'timeout': '30'}}})  2025-05-29 00:55:01.685188 | orchestrator | 2025-05-29 00:55:01.685198 | orchestrator | TASK [include_role : aodh] 
***************************************************** 2025-05-29 00:55:01.685234 | orchestrator | Thursday 29 May 2025 00:48:48 +0000 (0:00:03.032) 0:01:10.718 ********** 2025-05-29 00:55:01.685245 | orchestrator | included: aodh for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.685254 | orchestrator | 2025-05-29 00:55:01.685270 | orchestrator | TASK [haproxy-config : Copying over aodh haproxy config] *********************** 2025-05-29 00:55:01.685280 | orchestrator | Thursday 29 May 2025 00:48:49 +0000 (0:00:00.841) 0:01:11.559 ********** 2025-05-29 00:55:01.685291 | orchestrator | changed: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-29 00:55:01.685417 | orchestrator | changed: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-29 00:55:01.685454 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.685477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.685488 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.685498 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.685509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.685519 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.685533 | orchestrator | changed: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}}) 2025-05-29 00:55:01.685556 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.685567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.685577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.685587 | orchestrator | 2025-05-29 00:55:01.685597 | orchestrator | TASK [haproxy-config : Add configuration for aodh when using single external frontend] *** 2025-05-29 00:55:01.685607 | orchestrator | Thursday 29 May 2025 00:48:53 +0000 (0:00:04.729) 0:01:16.288 ********** 2025-05-29 00:55:01.685618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-29 00:55:01.685633 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.685648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.685664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.685675 | orchestrator | 
skipping: [testbed-node-0] 2025-05-29 00:55:01.685685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-29 00:55:01.685695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.685705 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.685715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.685736 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.685751 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-api', 'value': {'container_name': 'aodh_api', 'group': 'aodh-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-api:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'aodh:/var/lib/aodh/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8042'], 'timeout': '30'}, 'haproxy': {'aodh_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}, 'aodh_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}}}})  2025-05-29 00:55:01.685767 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-evaluator', 'value': {'container_name': 'aodh_evaluator', 'group': 'aodh-evaluator', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/aodh-evaluator:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-evaluator/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-evaluator 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.685777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-listener', 'value': {'container_name': 'aodh_listener', 'group': 'aodh-listener', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-listener 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.685787 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh-notifier', 'value': {'container_name': 'aodh_notifier', 'group': 'aodh-notifier', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/aodh-notifier:18.0.1.20241206', 'volumes': ['/etc/kolla/aodh-notifier/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port aodh-notifier 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.685803 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.685813 | orchestrator | 2025-05-29 00:55:01.685823 | orchestrator | TASK [haproxy-config : Configuring firewall for aodh] ************************** 2025-05-29 00:55:01.685832 | orchestrator | Thursday 29 May 2025 00:48:54 +0000 (0:00:01.065) 
0:01:17.354 ********** 2025-05-29 00:55:01.685843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-29 00:55:01.685855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-29 00:55:01.685865 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.685874 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-29 00:55:01.685890 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-29 00:55:01.685900 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.685909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8042', 'listen_port': '8042'}})  2025-05-29 00:55:01.685920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'aodh_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8042', 'listen_port': '8042'}})  2025-05-29 00:55:01.685927 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.685936 | orchestrator | 2025-05-29 00:55:01.685947 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL users config] *************** 2025-05-29 00:55:01.685956 | orchestrator | Thursday 29 May 2025 00:48:56 +0000 (0:00:01.561) 0:01:18.916 ********** 2025-05-29 00:55:01.685964 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.685972 
| orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.685979 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.685987 | orchestrator | 2025-05-29 00:55:01.685995 | orchestrator | TASK [proxysql-config : Copying over aodh ProxySQL rules config] *************** 2025-05-29 00:55:01.686003 | orchestrator | Thursday 29 May 2025 00:48:57 +0000 (0:00:01.338) 0:01:20.255 ********** 2025-05-29 00:55:01.686011 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.686060 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.686069 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.686077 | orchestrator | 2025-05-29 00:55:01.686085 | orchestrator | TASK [include_role : barbican] ************************************************* 2025-05-29 00:55:01.686093 | orchestrator | Thursday 29 May 2025 00:49:00 +0000 (0:00:02.272) 0:01:22.527 ********** 2025-05-29 00:55:01.686101 | orchestrator | included: barbican for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.686109 | orchestrator | 2025-05-29 00:55:01.686117 | orchestrator | TASK [haproxy-config : Copying over barbican haproxy config] ******************* 2025-05-29 00:55:01.686124 | orchestrator | Thursday 29 May 2025 00:49:01 +0000 (0:00:01.013) 0:01:23.540 ********** 2025-05-29 00:55:01.686141 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.686152 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.686166 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 
'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.686179 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.686187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.686215 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.686224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.686237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.686245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 
'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.686253 | orchestrator | 2025-05-29 00:55:01.686262 | orchestrator | TASK [haproxy-config : Add configuration for barbican when using single external frontend] *** 2025-05-29 00:55:01.686270 | orchestrator | Thursday 29 May 2025 00:49:06 +0000 (0:00:05.289) 0:01:28.829 ********** 2025-05-29 00:55:01.686284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-29 00:55:01.686305 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 
'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.686314 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.686322 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.686335 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-29 00:55:01.686358 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.686371 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.686379 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.686392 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 
'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-29 00:55:01.686401 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.686414 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.686422 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.686430 | orchestrator | 2025-05-29 00:55:01.686438 | orchestrator | TASK [haproxy-config : Configuring firewall for barbican] ********************** 2025-05-29 00:55:01.686446 | orchestrator | Thursday 29 May 2025 00:49:07 +0000 (0:00:00.762) 0:01:29.592 ********** 2025-05-29 00:55:01.686455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-29 00:55:01.686463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-29 00:55:01.686472 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.686480 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-29 00:55:01.686488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-29 00:55:01.686496 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.686508 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}})  2025-05-29 00:55:01.686516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}})  2025-05-29 00:55:01.686524 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.686532 | orchestrator | 2025-05-29 00:55:01.686540 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL users config] *********** 2025-05-29 00:55:01.686548 | orchestrator | Thursday 29 May 2025 00:49:08 +0000 (0:00:01.139) 0:01:30.731 ********** 2025-05-29 00:55:01.686556 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.686625 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.686634 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.686642 | orchestrator | 2025-05-29 00:55:01.686651 | orchestrator | TASK [proxysql-config : Copying over barbican ProxySQL rules config] *********** 2025-05-29 00:55:01.686659 | orchestrator | Thursday 29 May 2025 00:49:09 +0000 (0:00:01.306) 0:01:32.038 ********** 2025-05-29 00:55:01.686667 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.686675 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.686683 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.686690 | orchestrator | 2025-05-29 00:55:01.686698 | orchestrator | TASK [include_role : blazar] *************************************************** 2025-05-29 00:55:01.686706 | orchestrator | Thursday 29 May 2025 00:49:13 +0000 (0:00:03.647) 0:01:35.686 ********** 2025-05-29 00:55:01.686714 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.686722 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.686737 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.686745 | orchestrator | 2025-05-29 00:55:01.686774 | orchestrator | TASK [include_role : ceph-rgw] 
*************************************************
2025-05-29 00:55:01.686782 | orchestrator | Thursday 29 May 2025 00:49:13 +0000 (0:00:00.291) 0:01:35.977 **********
2025-05-29 00:55:01.686790 | orchestrator | included: ceph-rgw for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:55:01.686798 | orchestrator |
2025-05-29 00:55:01.686806 | orchestrator | TASK [haproxy-config : Copying over ceph-rgw haproxy config] *******************
2025-05-29 00:55:01.686814 | orchestrator | Thursday 29 May 2025 00:49:14 +0000 (0:00:00.921) 0:01:36.899 **********
2025-05-29 00:55:01.686823 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-05-29 00:55:01.686832 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-05-29 00:55:01.686841 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-05-29 00:55:01.686849 | orchestrator |
2025-05-29 00:55:01.686861 | orchestrator | TASK [haproxy-config : Add configuration for ceph-rgw when using single external frontend] ***
2025-05-29 00:55:01.686869 | orchestrator | Thursday 29 May 2025 00:49:16 +0000 (0:00:02.485) 0:01:39.384 **********
2025-05-29 00:55:01.686878 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-05-29 00:55:01.686897 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.686911 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-05-29 00:55:01.686919 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.686928 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, 'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}, 'radosgw_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}}}})
2025-05-29 00:55:01.686936 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.686944 | orchestrator |
2025-05-29 00:55:01.686952 | orchestrator | TASK [haproxy-config : Configuring firewall for ceph-rgw] **********************
2025-05-29 00:55:01.686960 | orchestrator | Thursday 29 May 2025 00:49:18 +0000 (0:00:01.362) 0:01:40.747 **********
2025-05-29 00:55:01.686969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-29 00:55:01.686980 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-29 00:55:01.686988 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.687000 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
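The `item=...` payloads in the loop results above are printed as Python literal dicts, so they can be pulled apart offline when analyzing a run like this one. A minimal helper sketch (not part of the deployment; the log line below is a shortened single-member example, not a verbatim entry):

```python
import ast
import re

# Shortened loop-result line in the same shape as the console entries above.
LINE = (
    "2025-05-29 00:55:01.686823 | orchestrator | changed: [testbed-node-0] => "
    "(item={'key': 'ceph-rgw', 'value': {'group': 'all', 'enabled': True, "
    "'haproxy': {'radosgw': {'enabled': True, 'mode': 'http', 'external': False, "
    "'port': '6780', 'custom_member_list': ["
    "'server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5']}}}})"
)

def parse_loop_item(line: str):
    """Extract the Ansible loop item dict from a `=> (item={...})` log line."""
    match = re.search(r"\(item=(\{.*\})\)\s*$", line)
    if not match:
        return None
    # The item is rendered as a Python literal, so literal_eval can parse it safely.
    return ast.literal_eval(match.group(1))

item = parse_loop_item(LINE)
members = item["value"]["haproxy"]["radosgw"]["custom_member_list"]
print(item["key"], len(members))
```

This only works because the default stdout callback prints loop items as Python reprs; a JSON-based callback would need `json.loads` instead.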
2025-05-29 00:55:01.687009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-29 00:55:01.687022 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.687030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-29 00:55:01.687043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'radosgw_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6780', 'custom_member_list': ['server testbed-node-3 192.168.16.13:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-4 192.168.16.14:8081 check inter 2000 rise 2 fall 5', 'server testbed-node-5 192.168.16.15:8081 check inter 2000 rise 2 fall 5']}})
2025-05-29 00:55:01.687051 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.687059 | orchestrator |
2025-05-29 00:55:01.687067 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL users config] ***********
2025-05-29 00:55:01.687075 | orchestrator | Thursday 29 May 2025 00:49:19 +0000 (0:00:01.728) 0:01:42.476 **********
2025-05-29 00:55:01.687083 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.687090 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.687098 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.687106 | orchestrator |
2025-05-29 00:55:01.687114 | orchestrator | TASK [proxysql-config : Copying over ceph-rgw ProxySQL rules config] ***********
2025-05-29 00:55:01.687122 | orchestrator | Thursday 29 May 2025 00:49:20 +0000 (0:00:00.650) 0:01:43.126 **********
2025-05-29 00:55:01.687130 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.687138 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.687146 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.687154 | orchestrator |
2025-05-29 00:55:01.687162 | orchestrator | TASK [include_role : cinder] ***************************************************
2025-05-29 00:55:01.687169 | orchestrator | Thursday 29 May 2025 00:49:21 +0000 (0:00:01.091) 0:01:44.218 **********
2025-05-29 00:55:01.687177 | orchestrator | included: cinder for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:55:01.687185 | orchestrator |
2025-05-29 00:55:01.687193 | orchestrator | TASK [haproxy-config : Copying over cinder haproxy config] *********************
2025-05-29 00:55:01.687200 | orchestrator | Thursday 29 May 2025 00:49:22 +0000 (0:00:00.974) 0:01:45.193 **********
2025-05-29 00:55:01.687209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.687218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687235 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687249 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687258 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.687266 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler
5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687275 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687304 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.687313 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687321 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687342 | orchestrator |
2025-05-29 00:55:01.687363 | orchestrator | TASK [haproxy-config : Add configuration for cinder when using single external frontend] ***
2025-05-29 00:55:01.687371 | orchestrator | Thursday 29 May 2025 00:49:27 +0000 (0:00:04.477) 0:01:49.670 **********
2025-05-29 00:55:01.687386 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.687395 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687408 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True,
'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687425 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.687434 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.687450 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687459 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687480 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.687489 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.687497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687523 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687531 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.687539 | orchestrator |
2025-05-29 00:55:01.687547 | orchestrator | TASK [haproxy-config : Configuring firewall for cinder] ************************
2025-05-29 00:55:01.687555 | orchestrator | Thursday 29 May 2025 00:49:28 +0000 (0:00:01.095) 0:01:50.766 **********
2025-05-29 00:55:01.687564 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-29 00:55:01.687576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776',
'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-29 00:55:01.687585 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.687593 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-29 00:55:01.687601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-29 00:55:01.687610 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.687618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-29 00:55:01.687626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}})
2025-05-29 00:55:01.687634 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.687641 | orchestrator |
2025-05-29 00:55:01.687649 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL users config] *************
2025-05-29 00:55:01.687663 | orchestrator | Thursday 29 May 2025 00:49:29 +0000 (0:00:01.000) 0:01:51.766 **********
2025-05-29 00:55:01.687671 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.687678 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.687686 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.687694 | orchestrator |
2025-05-29 00:55:01.687702 | orchestrator | TASK [proxysql-config : Copying over cinder ProxySQL rules config] *************
2025-05-29 00:55:01.687710 | orchestrator | Thursday 29 May 2025 00:49:30 +0000 (0:00:01.475) 0:01:53.241 **********
2025-05-29 00:55:01.687718 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.687726 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.687733 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.687741 | orchestrator |
2025-05-29 00:55:01.687749 | orchestrator | TASK [include_role : cloudkitty] ***********************************************
2025-05-29 00:55:01.687757 | orchestrator | Thursday 29 May 2025 00:49:32 +0000 (0:00:02.240) 0:01:55.482 **********
2025-05-29 00:55:01.687765 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.687773 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.687780 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.687788 | orchestrator |
2025-05-29 00:55:01.687796 | orchestrator | TASK [include_role : cyborg] ***************************************************
2025-05-29 00:55:01.687804 | orchestrator | Thursday 29 May 2025 00:49:33 +0000 (0:00:00.314) 0:01:55.796 **********
2025-05-29 00:55:01.687812 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.687819 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.687827 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.687835 | orchestrator |
2025-05-29 00:55:01.687843 | orchestrator | TASK [include_role : designate] ************************************************
2025-05-29 00:55:01.687851 | orchestrator | Thursday 29 May 2025 00:49:33 +0000 (0:00:00.461) 0:01:56.257 **********
2025-05-29 00:55:01.687858 | orchestrator | included: designate for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:55:01.687866 | orchestrator |
2025-05-29 00:55:01.687874 | orchestrator | TASK [haproxy-config : Copying over designate haproxy config] ******************
2025-05-29 00:55:01.687882 | orchestrator | Thursday 29 May 2025 00:49:34 +0000 (0:00:01.065) 0:01:57.322 **********
2025-05-29 00:55:01.687894 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-29 00:55:01.687908 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-29 00:55:01.687917 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-29 00:55:01.687930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687939 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687947 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-29 00:55:01.687959 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.687981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image':
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.687994 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688003 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688011 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688019 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688027 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688040 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 00:55:01.688054 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-29 00:55:01.688062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688070 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': 
{'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 
'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688152 | orchestrator | 2025-05-29 00:55:01.688166 | orchestrator | TASK [haproxy-config : Add configuration for designate when using single external frontend] *** 2025-05-29 00:55:01.688175 | orchestrator | Thursday 29 May 2025 00:49:40 +0000 (0:00:05.286) 0:02:02.609 ********** 2025-05-29 00:55:01.688183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-29 00:55:01.688192 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-29 00:55:01.688200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688212 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688233 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688254 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.688262 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-29 00:55:01.688271 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-29 00:55:01.688279 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-29 
00:55:01.688291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688299 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688317 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688333 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.688342 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-29 00:55:01.688393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-29 00:55:01.688406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.688433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.688442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.688450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.688458 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.688466 | orchestrator |
2025-05-29 00:55:01.688474 | orchestrator | TASK [haproxy-config : Configuring firewall for designate] *********************
2025-05-29 00:55:01.688482 | orchestrator | Thursday 29 May 2025 00:49:41 +0000 (0:00:00.966) 0:02:03.576 **********
2025-05-29 00:55:01.688490 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-29 00:55:01.688498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-29 00:55:01.688506 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.688512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-29 00:55:01.688519 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-29 00:55:01.688526 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.688536 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}})
2025-05-29 00:55:01.688548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}})
2025-05-29 00:55:01.688555 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.688562 | orchestrator |
2025-05-29 00:55:01.688568 | orchestrator | TASK [proxysql-config : Copying over designate ProxySQL users config] **********
2025-05-29 00:55:01.688575 | orchestrator | Thursday 29 May 2025 00:49:42 +0000 (0:00:01.036) 0:02:04.612 **********
2025-05-29 00:55:01.688582 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.688588 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.688595 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.688602 | orchestrator |
2025-05-29 00:55:01.688608 | orchestrator | TASK
[proxysql-config : Copying over designate ProxySQL rules config] **********
2025-05-29 00:55:01.688615 | orchestrator | Thursday 29 May 2025 00:49:43 +0000 (0:00:01.186) 0:02:05.798 **********
2025-05-29 00:55:01.688622 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.688628 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.688635 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.688642 | orchestrator |
2025-05-29 00:55:01.688648 | orchestrator | TASK [include_role : etcd] *****************************************************
2025-05-29 00:55:01.688655 | orchestrator | Thursday 29 May 2025 00:49:45 +0000 (0:00:02.151) 0:02:07.950 **********
2025-05-29 00:55:01.688661 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.688668 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.688675 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.688681 | orchestrator |
2025-05-29 00:55:01.688688 | orchestrator | TASK [include_role : glance] ***************************************************
2025-05-29 00:55:01.688699 | orchestrator | Thursday 29 May 2025 00:49:45 +0000 (0:00:00.476) 0:02:08.426 **********
2025-05-29 00:55:01.688706 | orchestrator | included: glance for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:55:01.688713 | orchestrator |
2025-05-29 00:55:01.688719 | orchestrator | TASK [haproxy-config : Copying over glance haproxy config] *********************
2025-05-29 00:55:01.688726 | orchestrator | Thursday 29 May 2025 00:49:47 +0000 (0:00:01.101) 0:02:09.528 **********
2025-05-29 00:55:01.688734 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': 
True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 00:55:01.688751 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 
'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 00:55:01.688990 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 00:55:01.689008 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 
5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 00:55:01.689031 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 00:55:01.689044 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 00:55:01.689056 | orchestrator | 2025-05-29 00:55:01.689064 | orchestrator | TASK [haproxy-config : Add configuration for glance when using single external frontend] *** 2025-05-29 00:55:01.689070 | orchestrator | Thursday 29 May 2025 00:49:52 +0000 (0:00:05.437) 0:02:14.966 ********** 2025-05-29 00:55:01.689082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 
'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-29 00:55:01.689090 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 00:55:01.689103 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.689118 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 
rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-29 00:55:01.689127 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 00:55:01.689139 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.689150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-29 00:55:01.689162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 
rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 00:55:01.689175 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.689182 | orchestrator | 2025-05-29 00:55:01.689188 | orchestrator | TASK [haproxy-config : Configuring firewall for glance] ************************ 2025-05-29 00:55:01.689195 | orchestrator | Thursday 29 May 2025 00:49:56 +0000 (0:00:04.522) 0:02:19.488 ********** 2025-05-29 00:55:01.689203 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-29 00:55:01.689213 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-29 00:55:01.689221 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.689228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': 
['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-29 00:55:01.689239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-29 00:55:01.689246 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.689253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}})  2025-05-29 00:55:01.689260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check 
inter 2000 rise 2 fall 5', '']}})  2025-05-29 00:55:01.689268 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.689275 | orchestrator | 2025-05-29 00:55:01.689285 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL users config] ************* 2025-05-29 00:55:01.689292 | orchestrator | Thursday 29 May 2025 00:50:01 +0000 (0:00:04.716) 0:02:24.205 ********** 2025-05-29 00:55:01.689299 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.689306 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.689313 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.689319 | orchestrator | 2025-05-29 00:55:01.689326 | orchestrator | TASK [proxysql-config : Copying over glance ProxySQL rules config] ************* 2025-05-29 00:55:01.689333 | orchestrator | Thursday 29 May 2025 00:50:02 +0000 (0:00:01.082) 0:02:25.288 ********** 2025-05-29 00:55:01.689340 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.689361 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.689368 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.689374 | orchestrator | 2025-05-29 00:55:01.689381 | orchestrator | TASK [include_role : gnocchi] ************************************************** 2025-05-29 00:55:01.689388 | orchestrator | Thursday 29 May 2025 00:50:04 +0000 (0:00:01.998) 0:02:27.286 ********** 2025-05-29 00:55:01.689395 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.689401 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.689408 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.689415 | orchestrator | 2025-05-29 00:55:01.689422 | orchestrator | TASK [include_role : grafana] ************************************************** 2025-05-29 00:55:01.689428 | orchestrator | Thursday 29 May 2025 00:50:05 +0000 (0:00:00.465) 0:02:27.751 ********** 2025-05-29 00:55:01.689435 | orchestrator | included: grafana for testbed-node-0, testbed-node-1, testbed-node-2 
2025-05-29 00:55:01.689442 | orchestrator | 2025-05-29 00:55:01.689448 | orchestrator | TASK [haproxy-config : Copying over grafana haproxy config] ******************** 2025-05-29 00:55:01.689455 | orchestrator | Thursday 29 May 2025 00:50:06 +0000 (0:00:01.035) 0:02:28.786 ********** 2025-05-29 00:55:01.689466 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-29 00:55:01.689474 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-29 00:55:01.689486 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': 
['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-29 00:55:01.689498 | orchestrator | 2025-05-29 00:55:01.689505 | orchestrator | TASK [haproxy-config : Add configuration for grafana when using single external frontend] *** 2025-05-29 00:55:01.689512 | orchestrator | Thursday 29 May 2025 00:50:10 +0000 (0:00:04.164) 0:02:32.951 ********** 2025-05-29 00:55:01.689520 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-29 00:55:01.689527 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.689534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': 
{'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-29 00:55:01.689541 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.689548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})  2025-05-29 00:55:01.689555 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.689562 | orchestrator | 2025-05-29 00:55:01.689569 | orchestrator | TASK [haproxy-config : Configuring firewall for grafana] *********************** 2025-05-29 00:55:01.689575 | orchestrator | Thursday 29 May 2025 00:50:11 +0000 (0:00:00.805) 0:02:33.757 ********** 2025-05-29 00:55:01.689586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})  2025-05-29 00:55:01.689593 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})  2025-05-29 00:55:01.689600 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-05-29 00:55:01.689607 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-05-29 00:55:01.689614 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.689621 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.689629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}})
2025-05-29 00:55:01.689646 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}})
2025-05-29 00:55:01.689654 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.689662 | orchestrator |
2025-05-29 00:55:01.689670 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL users config] ************
2025-05-29 00:55:01.689677 | orchestrator | Thursday 29 May 2025 00:50:12 +0000 (0:00:01.005) 0:02:34.763 **********
2025-05-29 00:55:01.689685 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.689693 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.689700 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.689708 | orchestrator |
2025-05-29 00:55:01.689715 | orchestrator | TASK [proxysql-config : Copying over grafana ProxySQL rules config] ************
2025-05-29 00:55:01.689723 | orchestrator | Thursday 29 May 2025 00:50:13 +0000 (0:00:01.253) 0:02:36.017 **********
2025-05-29 00:55:01.689731 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.689738 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.689746 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.689761 | orchestrator |
2025-05-29 00:55:01.689767 | orchestrator | TASK [include_role : heat] *****************************************************
2025-05-29 00:55:01.689774 | orchestrator | Thursday 29 May 2025 00:50:15 +0000 (0:00:02.400) 0:02:38.418 **********
2025-05-29 00:55:01.689781 | orchestrator | included: heat for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:55:01.689788 | orchestrator |
2025-05-29 00:55:01.689795 | orchestrator | TASK [haproxy-config : Copying over heat haproxy config] ***********************
2025-05-29 00:55:01.689801 | orchestrator | Thursday 29 May 2025 00:50:17 +0000 (0:00:01.198) 0:02:39.617 **********
2025-05-29 00:55:01.689809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.689816 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.689827 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.689847 | orchestrator | changed: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.689854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.689862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.689869 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.689879 | orchestrator | changed: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.689892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.689899 | orchestrator |
2025-05-29 00:55:01.689909 | orchestrator | TASK [haproxy-config : Add configuration for heat when using single external frontend] ***
2025-05-29 00:55:01.689916 | orchestrator | Thursday 29 May 2025 00:50:24 +0000 (0:00:07.201) 0:02:46.818 **********
2025-05-29 00:55:01.689923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.689930 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.689937 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.689944 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.689955 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.689970 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.689977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.689985 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.689992 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api', 'value': {'container_name': 'heat_api', 'group': 'heat-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8004'], 'timeout': '30'}, 'haproxy': {'heat_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}, 'heat_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.689999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-api-cfn', 'value': {'container_name': 'heat_api_cfn', 'group': 'heat-api-cfn', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-api-cfn:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-api-cfn/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8000'], 'timeout': '30'}, 'haproxy': {'heat_api_cfn': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}, 'heat_api_cfn_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.690009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat-engine', 'value': {'container_name': 'heat_engine', 'group': 'heat-engine', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/heat-engine:22.0.2.20241206', 'volumes': ['/etc/kolla/heat-engine/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port heat-engine 5672'], 'timeout': '30'}}})
2025-05-29 00:55:01.690056 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.690066 | orchestrator |
2025-05-29 00:55:01.690073 | orchestrator | TASK [haproxy-config : Configuring firewall for heat] **************************
2025-05-29 00:55:01.690080 | orchestrator | Thursday 29 May 2025 00:50:25 +0000 (0:00:00.880) 0:02:47.698 **********
2025-05-29 00:55:01.690086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-05-29 00:55:01.690096 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-05-29 00:55:01.690103 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-05-29 00:55:01.690113 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-05-29 00:55:01.690122 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.690128 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-05-29 00:55:01.690135 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-05-29 00:55:01.690142 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-05-29 00:55:01.690149 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-05-29 00:55:01.690156 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.690163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-05-29 00:55:01.690170 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8004', 'listen_port': '8004', 'tls_backend': 'no'}})
2025-05-29 00:55:01.690177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-05-29 00:55:01.690184 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'heat_api_cfn_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8000', 'listen_port': '8000', 'tls_backend': 'no'}})
2025-05-29 00:55:01.690191 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.690198 | orchestrator |
2025-05-29 00:55:01.690209 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL users config] ***************
2025-05-29 00:55:01.690216 | orchestrator | Thursday 29 May 2025 00:50:26 +0000 (0:00:01.481) 0:02:49.179 **********
2025-05-29 00:55:01.690223 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.690230 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.690237 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.690243 | orchestrator |
2025-05-29 00:55:01.690250 | orchestrator | TASK [proxysql-config : Copying over heat ProxySQL rules config] ***************
2025-05-29 00:55:01.690257 | orchestrator | Thursday 29 May 2025 00:50:27 +0000 (0:00:01.260) 0:02:50.440 **********
2025-05-29 00:55:01.690264 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.690270 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.690277 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.690283 | orchestrator |
2025-05-29 00:55:01.690290 | orchestrator | TASK [include_role : horizon] **************************************************
2025-05-29 00:55:01.690297 | orchestrator | Thursday 29 May 2025 00:50:30 +0000 (0:00:02.245) 0:02:52.686 **********
2025-05-29 00:55:01.690303 | orchestrator | included: horizon for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:55:01.690310 | orchestrator |
2025-05-29 00:55:01.690320 | orchestrator | TASK [haproxy-config : Copying over horizon haproxy config] ********************
2025-05-29 00:55:01.690327 | orchestrator | Thursday 29 May 2025 00:50:31 +0000 (0:00:01.052) 0:02:53.739 **********
2025-05-29 00:55:01.690359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-29 00:55:01.690373 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-29 00:55:01.690398 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-29 00:55:01.690411 | orchestrator |
2025-05-29 00:55:01.690427 | orchestrator | TASK [haproxy-config : Add configuration for horizon when using single external frontend] ***
2025-05-29 00:55:01.690451 | orchestrator | Thursday 29 May 2025 00:50:34 +0000 (0:00:03.531) 0:02:57.270 **********
2025-05-29 00:55:01.690468 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-29 00:55:01.690486 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.690505 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-29 00:55:01.690526 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.690539 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})
2025-05-29 00:55:01.690547 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.690554 | orchestrator |
2025-05-29 00:55:01.690564 | orchestrator | TASK [haproxy-config : Configuring firewall for horizon] ***********************
2025-05-29 00:55:01.690571 | orchestrator | Thursday 29 May 2025 00:50:35 +0000 (0:00:00.910) 0:02:58.181 **********
2025-05-29 00:55:01.690578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-29 00:55:01.690586 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-29 00:55:01.690596 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-29 00:55:01.690610 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-29 00:55:01.690618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})
2025-05-29 00:55:01.690625 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.690632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})
2025-05-29 00:55:01.690639 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})
2025-05-29 00:55:01.690646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode':
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-29 00:55:01.690653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-29 00:55:01.690660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-29 00:55:01.690667 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.690707 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}})  2025-05-29 00:55:01.690721 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-29 00:55:01.690732 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 
'tls_backend': 'no'}})  2025-05-29 00:55:01.690740 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon_external_redirect', 'value': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}})  2025-05-29 00:55:01.690747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'acme_client', 'value': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}})  2025-05-29 00:55:01.690758 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.690765 | orchestrator | 2025-05-29 00:55:01.690772 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL users config] ************ 2025-05-29 00:55:01.690779 | orchestrator | Thursday 29 May 2025 00:50:36 +0000 (0:00:01.089) 0:02:59.270 ********** 2025-05-29 00:55:01.690785 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.690792 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.690798 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.690805 | orchestrator | 2025-05-29 00:55:01.690812 | orchestrator | TASK [proxysql-config : Copying over horizon ProxySQL rules config] ************ 2025-05-29 00:55:01.690818 | orchestrator | Thursday 29 May 2025 00:50:38 +0000 (0:00:01.263) 0:03:00.534 ********** 2025-05-29 00:55:01.690825 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.690832 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.690838 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.690845 | orchestrator | 2025-05-29 00:55:01.690852 | orchestrator | TASK [include_role : influxdb] ************************************************* 2025-05-29 00:55:01.690859 | orchestrator | Thursday 29 May 2025 00:50:40 +0000 (0:00:02.350) 0:03:02.884 ********** 2025-05-29 00:55:01.690865 | orchestrator | skipping: [testbed-node-0] 
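The looped items in the haproxy-config tasks above all share one shape: a service map whose entries carry `enabled`, `mode`, `listen_port`, optional `frontend_http_extra` lines, and sometimes `with_frontend: False` (which is why the `acme_client` entry is skipped). As a minimal sketch of how such a map could be turned into haproxy frontend stanzas — the dict layout mirrors the logged items, but `render_frontend`/`render_all` are hypothetical helpers, not kolla-ansible's actual implementation:

```python
# Sketch only: dict layout copied from the log items above;
# the rendering helpers are illustrative, not kolla-ansible code.

HORIZON_HAPROXY = {
    "horizon": {
        "enabled": True, "mode": "http", "external": False,
        "port": "443", "listen_port": "80",
        "frontend_http_extra": [
            "use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }"
        ],
        "backend_http_extra": ["balance roundrobin"],
        "tls_backend": "no",
    },
    # with_frontend: False -> no frontend stanza, matching the skips in the log
    "acme_client": {"enabled": True, "with_frontend": False,
                    "custom_member_list": []},
}


def render_frontend(name: str, svc: dict) -> str:
    """Render one haproxy frontend block from a service entry."""
    lines = [f"frontend {name}_front",
             f"    mode {svc['mode']}",
             f"    bind *:{svc['listen_port']}"]
    for extra in svc.get("frontend_http_extra", []):
        lines.append(f"    {extra}")
    return "\n".join(lines)


def render_all(services: dict) -> str:
    """Emit frontends only for enabled entries that want one."""
    blocks = [render_frontend(n, s) for n, s in services.items()
              if s.get("enabled") and s.get("with_frontend", True)]
    return "\n\n".join(blocks)


if __name__ == "__main__":
    print(render_all(HORIZON_HAPROXY))
```

Under this reading, the per-node skips in the log simply reflect entries filtered out by conditions like `with_frontend`, while the `changed:` results correspond to entries that do produce configuration.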
2025-05-29 00:55:01.690872 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.690879 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.690885 | orchestrator | 2025-05-29 00:55:01.690892 | orchestrator | TASK [include_role : ironic] *************************************************** 2025-05-29 00:55:01.690898 | orchestrator | Thursday 29 May 2025 00:50:40 +0000 (0:00:00.619) 0:03:03.503 ********** 2025-05-29 00:55:01.690905 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.690912 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.690918 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.690925 | orchestrator | 2025-05-29 00:55:01.690931 | orchestrator | TASK [include_role : keystone] ************************************************* 2025-05-29 00:55:01.690938 | orchestrator | Thursday 29 May 2025 00:50:41 +0000 (0:00:00.393) 0:03:03.896 ********** 2025-05-29 00:55:01.690945 | orchestrator | included: keystone for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.690951 | orchestrator | 2025-05-29 00:55:01.690958 | orchestrator | TASK [haproxy-config : Copying over keystone haproxy config] ******************* 2025-05-29 00:55:01.690970 | orchestrator | Thursday 29 May 2025 00:50:42 +0000 (0:00:01.326) 0:03:05.223 ********** 2025-05-29 00:55:01.690982 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': 
{'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 00:55:01.690991 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 00:55:01.691007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-29 00:55:01.691015 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 00:55:01.691023 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 00:55:01.691030 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-29 00:55:01.691040 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 00:55:01.691056 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 00:55:01.691063 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 
'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-29 00:55:01.691070 | orchestrator | 2025-05-29 00:55:01.691077 | orchestrator | TASK [haproxy-config : Add configuration for keystone when using single external frontend] *** 2025-05-29 00:55:01.691084 | orchestrator | Thursday 29 May 2025 00:50:47 +0000 (0:00:04.642) 0:03:09.865 ********** 2025-05-29 00:55:01.691091 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-29 00:55:01.691098 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 00:55:01.691109 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-29 00:55:01.691116 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.691139 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 
'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-29 00:55:01.691147 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 00:55:01.691154 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-29 00:55:01.691161 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.691168 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-29 00:55:01.691179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 00:55:01.691190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-29 00:55:01.691197 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.691204 | orchestrator | 2025-05-29 00:55:01.691211 | orchestrator | TASK [haproxy-config : Configuring firewall for keystone] ********************** 2025-05-29 00:55:01.691222 | orchestrator | Thursday 29 May 2025 00:50:48 +0000 (0:00:00.793) 0:03:10.659 ********** 2025-05-29 00:55:01.691239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-29 00:55:01.691246 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-29 00:55:01.691253 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.691260 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-29 00:55:01.691267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-29 00:55:01.691274 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.691281 | orchestrator | skipping: [testbed-node-2] 
=> (item={'key': 'keystone_internal', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-29 00:55:01.691289 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}})  2025-05-29 00:55:01.691295 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.691302 | orchestrator | 2025-05-29 00:55:01.691309 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL users config] *********** 2025-05-29 00:55:01.691316 | orchestrator | Thursday 29 May 2025 00:50:49 +0000 (0:00:01.183) 0:03:11.843 ********** 2025-05-29 00:55:01.691323 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.691329 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.691336 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.691385 | orchestrator | 2025-05-29 00:55:01.691394 | orchestrator | TASK [proxysql-config : Copying over keystone ProxySQL rules config] *********** 2025-05-29 00:55:01.691401 | orchestrator | Thursday 29 May 2025 00:50:50 +0000 (0:00:01.359) 0:03:13.202 ********** 2025-05-29 00:55:01.691407 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.691414 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.691421 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.691427 | orchestrator | 2025-05-29 00:55:01.691434 | orchestrator | TASK [include_role : letsencrypt] ********************************************** 2025-05-29 00:55:01.691446 | orchestrator | Thursday 29 May 2025 00:50:53 +0000 (0:00:02.360) 0:03:15.563 ********** 2025-05-29 00:55:01.691452 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.691459 | orchestrator | 
skipping: [testbed-node-1] 2025-05-29 00:55:01.691465 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.691472 | orchestrator | 2025-05-29 00:55:01.691479 | orchestrator | TASK [include_role : magnum] *************************************************** 2025-05-29 00:55:01.691486 | orchestrator | Thursday 29 May 2025 00:50:53 +0000 (0:00:00.305) 0:03:15.869 ********** 2025-05-29 00:55:01.691492 | orchestrator | included: magnum for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.691499 | orchestrator | 2025-05-29 00:55:01.691505 | orchestrator | TASK [haproxy-config : Copying over magnum haproxy config] ********************* 2025-05-29 00:55:01.691516 | orchestrator | Thursday 29 May 2025 00:50:54 +0000 (0:00:01.338) 0:03:17.207 ********** 2025-05-29 00:55:01.691524 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 00:55:01.691536 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.691544 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 00:55:01.691552 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.691566 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 00:55:01.691574 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.691581 | orchestrator | 2025-05-29 00:55:01.691588 | orchestrator | TASK [haproxy-config : Add configuration for magnum when using single external frontend] *** 2025-05-29 00:55:01.691594 | orchestrator | Thursday 29 May 2025 00:50:59 +0000 (0:00:04.954) 0:03:22.162 ********** 2025-05-29 00:55:01.691606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-29 00:55:01.691613 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.691621 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.691628 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-29 00:55:01.691645 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-29 00:55:01.691656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.691664 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.691671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.691678 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.691684 | orchestrator | 2025-05-29 00:55:01.691696 | orchestrator | TASK [haproxy-config : Configuring firewall for magnum] ************************ 2025-05-29 00:55:01.691703 | orchestrator | Thursday 29 May 
2025 00:51:00 +0000 (0:00:00.934) 0:03:23.097 ********** 2025-05-29 00:55:01.691710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-29 00:55:01.691717 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-29 00:55:01.691729 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.691735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-29 00:55:01.691742 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-29 00:55:01.691749 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.691756 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}})  2025-05-29 00:55:01.691763 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}})  2025-05-29 00:55:01.691770 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.691777 | orchestrator | 2025-05-29 00:55:01.691783 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL users config] ************* 2025-05-29 00:55:01.691790 | orchestrator | Thursday 29 May 2025 00:51:02 +0000 (0:00:01.497) 0:03:24.594 ********** 2025-05-29 00:55:01.691797 | orchestrator | 
changed: [testbed-node-0] 2025-05-29 00:55:01.691803 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.691810 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.691817 | orchestrator | 2025-05-29 00:55:01.691824 | orchestrator | TASK [proxysql-config : Copying over magnum ProxySQL rules config] ************* 2025-05-29 00:55:01.691830 | orchestrator | Thursday 29 May 2025 00:51:03 +0000 (0:00:01.434) 0:03:26.029 ********** 2025-05-29 00:55:01.691837 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.691844 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.691854 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.691860 | orchestrator | 2025-05-29 00:55:01.691867 | orchestrator | TASK [include_role : manila] *************************************************** 2025-05-29 00:55:01.691874 | orchestrator | Thursday 29 May 2025 00:51:05 +0000 (0:00:02.366) 0:03:28.395 ********** 2025-05-29 00:55:01.691880 | orchestrator | included: manila for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.691887 | orchestrator | 2025-05-29 00:55:01.691893 | orchestrator | TASK [haproxy-config : Copying over manila haproxy config] ********************* 2025-05-29 00:55:01.691900 | orchestrator | Thursday 29 May 2025 00:51:07 +0000 (0:00:01.188) 0:03:29.584 ********** 2025-05-29 00:55:01.691916 | orchestrator | changed: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-29 00:55:01.691924 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.691934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.691941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.691948 | orchestrator | changed: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-29 00:55:01.691958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.691964 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 
'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.691975 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.691986 | orchestrator | changed: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}}) 2025-05-29 
00:55:01.691993 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.691999 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.692009 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.692015 
| orchestrator | 2025-05-29 00:55:01.692022 | orchestrator | TASK [haproxy-config : Add configuration for manila when using single external frontend] *** 2025-05-29 00:55:01.692028 | orchestrator | Thursday 29 May 2025 00:51:11 +0000 (0:00:04.197) 0:03:33.782 ********** 2025-05-29 00:55:01.692038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-29 00:55:01.692049 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-29 00:55:01.692055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.692062 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.692071 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.692078 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.692085 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.692095 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.692106 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.692112 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.692119 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-api', 'value': {'container_name': 'manila_api', 'group': 'manila-api', 'image': 'registry.osism.tech/kolla/release/manila-api:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8786'], 'timeout': '30'}, 'haproxy': {'manila_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}, 'manila_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}}}})  2025-05-29 00:55:01.692125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-scheduler', 'value': {'container_name': 'manila_scheduler', 'group': 'manila-scheduler', 'image': 'registry.osism.tech/kolla/release/manila-scheduler:18.2.2.20241206', 'enabled': True, 'volumes': ['/etc/kolla/manila-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.692135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-share', 'value': {'container_name': 'manila_share', 'group': 'manila-share', 'image': 'registry.osism.tech/kolla/release/manila-share:18.2.2.20241206', 'enabled': True, 'privileged': 
True, 'volumes': ['/etc/kolla/manila-share/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', '/lib/modules:/lib/modules:ro', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-share 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.692141 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila-data', 'value': {'container_name': 'manila_data', 'group': 'manila-data', 'image': 'registry.osism.tech/kolla/release/manila-data:18.2.2.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/manila-data/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/run:/run:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port manila-data 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.692148 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.692154 | orchestrator | 2025-05-29 00:55:01.692160 | orchestrator | TASK [haproxy-config : Configuring firewall for manila] ************************ 2025-05-29 00:55:01.692171 | orchestrator | Thursday 29 May 2025 00:51:12 +0000 (0:00:00.969) 0:03:34.751 ********** 2025-05-29 00:55:01.692177 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-29 00:55:01.692188 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-29 00:55:01.692194 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.692201 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-29 00:55:01.692207 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-29 00:55:01.692213 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.692219 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8786', 'listen_port': '8786'}})  2025-05-29 00:55:01.692226 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'manila_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8786', 'listen_port': '8786'}})  2025-05-29 00:55:01.692232 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.692238 | orchestrator | 2025-05-29 00:55:01.692245 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL users config] ************* 2025-05-29 00:55:01.692251 | orchestrator | Thursday 29 May 2025 00:51:13 +0000 (0:00:01.129) 0:03:35.880 ********** 2025-05-29 00:55:01.692257 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.692263 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.692274 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.692281 | orchestrator | 2025-05-29 00:55:01.692287 | orchestrator | TASK [proxysql-config : Copying over manila ProxySQL rules config] ************* 2025-05-29 00:55:01.692293 | orchestrator | Thursday 29 May 2025 00:51:14 +0000 (0:00:01.287) 0:03:37.167 ********** 2025-05-29 00:55:01.692299 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.692305 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.692311 | 
orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.692318 | orchestrator | 2025-05-29 00:55:01.692324 | orchestrator | TASK [include_role : mariadb] ************************************************** 2025-05-29 00:55:01.692330 | orchestrator | Thursday 29 May 2025 00:51:17 +0000 (0:00:02.397) 0:03:39.565 ********** 2025-05-29 00:55:01.692336 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.692342 | orchestrator | 2025-05-29 00:55:01.692363 | orchestrator | TASK [mariadb : Ensure mysql monitor user exist] ******************************* 2025-05-29 00:55:01.692369 | orchestrator | Thursday 29 May 2025 00:51:18 +0000 (0:00:01.437) 0:03:41.003 ********** 2025-05-29 00:55:01.692375 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-29 00:55:01.692382 | orchestrator | 2025-05-29 00:55:01.692388 | orchestrator | TASK [haproxy-config : Copying over mariadb haproxy config] ******************** 2025-05-29 00:55:01.692394 | orchestrator | Thursday 29 May 2025 00:51:21 +0000 (0:00:03.218) 0:03:44.221 ********** 2025-05-29 00:55:01.692404 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-29 00:55:01.692420 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-29 00:55:01.692427 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.692434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': 
['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-29 00:55:01.692450 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-29 00:55:01.692461 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.692472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 
check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-29 00:55:01.692479 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-29 00:55:01.692486 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.692492 | orchestrator | 2025-05-29 00:55:01.692498 | orchestrator | TASK [haproxy-config : Add configuration for mariadb when using single external frontend] *** 2025-05-29 00:55:01.692505 | orchestrator | Thursday 29 May 2025 00:51:25 +0000 (0:00:03.650) 0:03:47.872 ********** 2025-05-29 00:55:01.692515 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 
'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-29 00:55:01.692534 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-29 00:55:01.692541 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.692547 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-29 00:55:01.692554 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-29 00:55:01.692565 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.692579 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option 
clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}})  2025-05-29 00:55:01.692587 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}})  2025-05-29 00:55:01.692594 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.692600 | orchestrator | 2025-05-29 00:55:01.692606 | orchestrator | TASK [haproxy-config : Configuring firewall for mariadb] *********************** 2025-05-29 00:55:01.692612 | orchestrator | Thursday 29 May 2025 00:51:28 +0000 (0:00:02.921) 0:03:50.793 ********** 2025-05-29 00:55:01.692619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 
check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-29 00:55:01.692626 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-29 00:55:01.692636 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.692646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})  2025-05-29 00:55:01.692652 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 
rise 2 fall 5 backup', '']}})
2025-05-29 00:55:01.692659 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.692669 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-05-29 00:55:01.692676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'mariadb_external_lb', 'value': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}})
2025-05-29 00:55:01.692683 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.692689 | orchestrator |
2025-05-29 00:55:01.692696 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL users config] ************
2025-05-29 00:55:01.692702 | orchestrator | Thursday 29 May 2025 00:51:31 +0000 (0:00:03.515) 0:03:54.308 **********
2025-05-29 00:55:01.692708 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.692714 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.692720 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.692727 | orchestrator |
2025-05-29 00:55:01.692733 | orchestrator | TASK [proxysql-config : Copying over mariadb ProxySQL rules config] ************
2025-05-29 00:55:01.692739 | orchestrator | Thursday 29 May 2025 00:51:33 +0000 (0:00:02.150) 0:03:56.459 **********
2025-05-29 00:55:01.692745 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.692751 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.692758 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.692764 | orchestrator |
2025-05-29 00:55:01.692770 | orchestrator | TASK [include_role : masakari] *************************************************
2025-05-29 00:55:01.692783 | orchestrator | Thursday 29 May 2025 00:51:35 +0000 (0:00:01.816) 0:03:58.275 **********
2025-05-29 00:55:01.692789 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.692795 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.692801 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.692812 | orchestrator |
2025-05-29 00:55:01.692818 | orchestrator | TASK [include_role : memcached] ************************************************
2025-05-29 00:55:01.692824 | orchestrator | Thursday 29 May 2025 00:51:36 +0000 (0:00:00.527) 0:03:58.803 **********
2025-05-29 00:55:01.692830 | orchestrator | included: memcached for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:55:01.692836 | orchestrator |
2025-05-29 00:55:01.692842 | orchestrator | TASK [haproxy-config : Copying over memcached haproxy config] ******************
2025-05-29 00:55:01.692849 | orchestrator | Thursday 29 May 2025 00:51:37 +0000 (0:00:01.451) 0:04:00.255 **********
2025-05-29 00:55:01.692855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-29 00:55:01.692865 | orchestrator | changed: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}}) 2025-05-29 00:55:01.692877 | orchestrator | changed: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 
3600s'], 'active_passive': True}}}}) 2025-05-29 00:55:01.692883 | orchestrator | 2025-05-29 00:55:01.692890 | orchestrator | TASK [haproxy-config : Add configuration for memcached when using single external frontend] *** 2025-05-29 00:55:01.692896 | orchestrator | Thursday 29 May 2025 00:51:39 +0000 (0:00:01.883) 0:04:02.139 ********** 2025-05-29 00:55:01.692902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-29 00:55:01.692915 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.692921 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': 
True}}}})  2025-05-29 00:55:01.692928 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.692934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'container_name': 'memcached', 'image': 'registry.osism.tech/kolla/release/memcached:1.6.14.20241206', 'enabled': True, 'group': 'memcached', 'volumes': ['/etc/kolla/memcached/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen memcached 11211'], 'timeout': '30'}, 'haproxy': {'memcached': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}}}})  2025-05-29 00:55:01.692941 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.692947 | orchestrator | 2025-05-29 00:55:01.692956 | orchestrator | TASK [haproxy-config : Configuring firewall for memcached] ********************* 2025-05-29 00:55:01.692962 | orchestrator | Thursday 29 May 2025 00:51:39 +0000 (0:00:00.377) 0:04:02.516 ********** 2025-05-29 00:55:01.692969 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-29 00:55:01.692975 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.692982 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})  2025-05-29 00:55:01.692988 | orchestrator | 
skipping: [testbed-node-1]
2025-05-29 00:55:01.692995 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'memcached', 'value': {'enabled': False, 'mode': 'tcp', 'port': '11211', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'active_passive': True}})
2025-05-29 00:55:01.693001 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.693008 | orchestrator |
2025-05-29 00:55:01.693018 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL users config] **********
2025-05-29 00:55:01.693024 | orchestrator | Thursday 29 May 2025 00:51:41 +0000 (0:00:01.062) 0:04:03.578 **********
2025-05-29 00:55:01.693030 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.693037 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.693043 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.693049 | orchestrator |
2025-05-29 00:55:01.693055 | orchestrator | TASK [proxysql-config : Copying over memcached ProxySQL rules config] **********
2025-05-29 00:55:01.693066 | orchestrator | Thursday 29 May 2025 00:51:41 +0000 (0:00:00.872) 0:04:04.450 **********
2025-05-29 00:55:01.693072 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.693078 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.693084 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.693091 | orchestrator |
2025-05-29 00:55:01.693097 | orchestrator | TASK [include_role : mistral] **************************************************
2025-05-29 00:55:01.693103 | orchestrator | Thursday 29 May 2025 00:51:43 +0000 (0:00:01.655) 0:04:06.106 **********
2025-05-29 00:55:01.693109 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.693115 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.693121 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.693135 | orchestrator |
2025-05-29 00:55:01.693141 | orchestrator | TASK [include_role : neutron] **************************************************
2025-05-29 00:55:01.693147 | orchestrator | Thursday 29 May 2025 00:51:43 +0000 (0:00:00.290) 0:04:06.396 **********
2025-05-29 00:55:01.693154 | orchestrator | included: neutron for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:55:01.693160 | orchestrator |
2025-05-29 00:55:01.693166 | orchestrator | TASK [haproxy-config : Copying over neutron haproxy config] ********************
2025-05-29 00:55:01.693172 | orchestrator | Thursday 29 May 2025 00:51:45 +0000 (0:00:01.574) 0:04:07.970 **********
2025-05-29 00:55:01.693179 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 00:55:01.693186 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes':
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693195 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-29 00:55:01.693217 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 00:55:01.693224 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693232 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 
00:55:01.693239 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.693251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693261 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 00:55:01.693274 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693280 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.693287 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.693293 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693303 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 00:55:01.693311 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 00:55:01.693326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693333 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': 
'9696', 'listen_port': '9696'}}}}) 2025-05-29 00:55:01.693339 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-29 00:55:01.693363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693374 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693385 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693399 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693437 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693449 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 00:55:01.693465 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 00:55:01.693472 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693479 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693485 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.693492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.693502 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.693513 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.693524 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693538 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 00:55:01.693544 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 00:55:01.693567 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.693578 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 
00:55:01.693585 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.693591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.693598 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.693614 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 00:55:01.693630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693637 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 00:55:01.693643 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 00:55:01.693660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 00:55:01.693671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
2025-05-29 00:55:01.693677 | orchestrator |
2025-05-29 00:55:01.693684 | orchestrator | TASK [haproxy-config : Add configuration for neutron when using single external frontend] ***
2025-05-29 00:55:01.693690 | orchestrator | Thursday 29 May 2025 00:51:50 +0000 (0:00:05.259) 0:04:13.229 **********
2025-05-29 00:55:01.693701 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  
2025-05-29 00:55:01.693708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 
'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 00:55:01.693714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693728 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 
'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693746 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693753 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693760 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693776 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 00:55:01.693783 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 00:55:01.693793 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693800 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': 
'30'}}})  2025-05-29 00:55:01.693807 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.693814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.693820 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.693830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.693840 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.693850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 00:55:01 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:55:01.694270 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 00:55:01.694323 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 00:55:01.694374 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694398 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694457 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.694491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694509 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.694521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.694543 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 00:55:01.694570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.694583 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694601 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': 
'9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 00:55:01.694656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.694669 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 00:55:01.694688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 00:55:01.694700 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.694717 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 00:55:01.694747 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694760 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.694774 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  
2025-05-29 00:55:01.694793 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 00:55:01.694804 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.694816 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694832 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.694844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 00:55:01.694862 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 00:55:01.694892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 00:55:01.694905 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.694916 | orchestrator | skipping: [testbed-node-2] 2025-05-29 
00:55:01.694927 | orchestrator | 2025-05-29 00:55:01.694940 | orchestrator | TASK [haproxy-config : Configuring firewall for neutron] *********************** 2025-05-29 00:55:01.694952 | orchestrator | Thursday 29 May 2025 00:51:52 +0000 (0:00:02.080) 0:04:15.310 ********** 2025-05-29 00:55:01.694968 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-29 00:55:01.694981 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-29 00:55:01.694992 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.695004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-29 00:55:01.695015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-29 00:55:01.695025 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.695037 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}})  2025-05-29 00:55:01.695048 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron_server_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}})  2025-05-29 00:55:01.695060 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.695070 | orchestrator | 2025-05-29 00:55:01.695081 | orchestrator | TASK 
[proxysql-config : Copying over neutron ProxySQL users config] ************ 2025-05-29 00:55:01.695099 | orchestrator | Thursday 29 May 2025 00:51:54 +0000 (0:00:02.109) 0:04:17.420 ********** 2025-05-29 00:55:01.695110 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.695121 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.695139 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.695150 | orchestrator | 2025-05-29 00:55:01.695161 | orchestrator | TASK [proxysql-config : Copying over neutron ProxySQL rules config] ************ 2025-05-29 00:55:01.695173 | orchestrator | Thursday 29 May 2025 00:51:56 +0000 (0:00:01.371) 0:04:18.791 ********** 2025-05-29 00:55:01.695183 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.695195 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.695206 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.695217 | orchestrator | 2025-05-29 00:55:01.695227 | orchestrator | TASK [include_role : placement] ************************************************ 2025-05-29 00:55:01.695238 | orchestrator | Thursday 29 May 2025 00:51:58 +0000 (0:00:02.321) 0:04:21.113 ********** 2025-05-29 00:55:01.695249 | orchestrator | included: placement for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.695260 | orchestrator | 2025-05-29 00:55:01.695271 | orchestrator | TASK [haproxy-config : Copying over placement haproxy config] ****************** 2025-05-29 00:55:01.695282 | orchestrator | Thursday 29 May 2025 00:52:00 +0000 (0:00:01.541) 0:04:22.654 ********** 2025-05-29 00:55:01.695295 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.695307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.695324 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.695336 | orchestrator | 2025-05-29 00:55:01.695371 | orchestrator | TASK [haproxy-config : Add configuration for placement when using single external frontend] *** 2025-05-29 00:55:01.695397 | orchestrator | Thursday 29 May 2025 00:52:03 +0000 (0:00:03.680) 0:04:26.334 ********** 2025-05-29 00:55:01.695417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-29 00:55:01.695430 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.695442 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 
'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-29 00:55:01.695454 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.695465 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-29 00:55:01.695477 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.695488 | orchestrator | 2025-05-29 00:55:01.695498 | orchestrator | TASK [haproxy-config : Configuring firewall for 
placement] ********************* 2025-05-29 00:55:01.695509 | orchestrator | Thursday 29 May 2025 00:52:04 +0000 (0:00:00.772) 0:04:27.107 ********** 2025-05-29 00:55:01.695525 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-29 00:55:01.695538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-29 00:55:01.695549 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.695561 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-29 00:55:01.695579 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-29 00:55:01.695591 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.695602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-29 00:55:01.695621 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}})  2025-05-29 00:55:01.695632 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.695643 | orchestrator | 2025-05-29 00:55:01.695654 | orchestrator | TASK 
[proxysql-config : Copying over placement ProxySQL users config] ********** 2025-05-29 00:55:01.695665 | orchestrator | Thursday 29 May 2025 00:52:05 +0000 (0:00:00.977) 0:04:28.084 ********** 2025-05-29 00:55:01.695677 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.695688 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.695699 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.695709 | orchestrator | 2025-05-29 00:55:01.695721 | orchestrator | TASK [proxysql-config : Copying over placement ProxySQL rules config] ********** 2025-05-29 00:55:01.695732 | orchestrator | Thursday 29 May 2025 00:52:07 +0000 (0:00:01.457) 0:04:29.541 ********** 2025-05-29 00:55:01.695743 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.695755 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.695765 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.695776 | orchestrator | 2025-05-29 00:55:01.695787 | orchestrator | TASK [include_role : nova] ***************************************************** 2025-05-29 00:55:01.695799 | orchestrator | Thursday 29 May 2025 00:52:09 +0000 (0:00:02.400) 0:04:31.942 ********** 2025-05-29 00:55:01.695809 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.695820 | orchestrator | 2025-05-29 00:55:01.695831 | orchestrator | TASK [haproxy-config : Copying over nova haproxy config] *********************** 2025-05-29 00:55:01.695842 | orchestrator | Thursday 29 May 2025 00:52:11 +0000 (0:00:01.682) 0:04:33.624 ********** 2025-05-29 00:55:01.695855 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.695874 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': 
{'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.695902 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.695915 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.695928 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.695940 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.695962 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.695975 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.695993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.696006 | orchestrator | 2025-05-29 00:55:01.696018 | orchestrator | TASK [haproxy-config : Add configuration for nova when using single external frontend] *** 2025-05-29 00:55:01.696029 | orchestrator | Thursday 29 May 2025 00:52:16 +0000 (0:00:05.477) 0:04:39.102 ********** 2025-05-29 00:55:01.696041 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-29 00:55:01.696054 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.696076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.696088 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.696107 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-29 00:55:01.696120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.696131 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.696142 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.696155 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 
'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-29 00:55:01.696178 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.696190 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.696201 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.696213 | orchestrator | 2025-05-29 00:55:01.696224 | orchestrator | TASK [haproxy-config : Configuring firewall for nova] ************************** 2025-05-29 00:55:01.696235 | orchestrator | Thursday 29 May 2025 00:52:17 +0000 (0:00:00.841) 0:04:39.943 ********** 2025-05-29 00:55:01.696253 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}})  2025-05-29 00:55:01.696265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-29 00:55:01.696277 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-29 00:55:01.696288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-29 00:55:01.696299 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.696310 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-29 00:55:01.696321 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-29 00:55:01.696332 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-29 00:55:01.696376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-29 00:55:01.696398 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.696409 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'nova_api', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-29 00:55:01.696421 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_api_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}})  2025-05-29 00:55:01.696432 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-29 00:55:01.696444 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_metadata_external', 'value': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}})  2025-05-29 00:55:01.696455 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.696466 | orchestrator | 2025-05-29 00:55:01.696477 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL users config] *************** 2025-05-29 00:55:01.696488 | orchestrator | Thursday 29 May 2025 00:52:18 +0000 (0:00:01.329) 0:04:41.272 ********** 2025-05-29 00:55:01.696504 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.696515 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.696526 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.696537 | orchestrator | 2025-05-29 00:55:01.696548 | orchestrator | TASK [proxysql-config : Copying over nova ProxySQL rules config] *************** 2025-05-29 00:55:01.696559 | orchestrator | Thursday 29 May 2025 00:52:20 +0000 (0:00:01.425) 0:04:42.698 ********** 2025-05-29 00:55:01.696570 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.696581 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.696592 | orchestrator | 
changed: [testbed-node-2] 2025-05-29 00:55:01.696603 | orchestrator | 2025-05-29 00:55:01.696614 | orchestrator | TASK [include_role : nova-cell] ************************************************ 2025-05-29 00:55:01.696625 | orchestrator | Thursday 29 May 2025 00:52:22 +0000 (0:00:02.486) 0:04:45.185 ********** 2025-05-29 00:55:01.696636 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.696647 | orchestrator | 2025-05-29 00:55:01.696658 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-novncproxy] ****************** 2025-05-29 00:55:01.696669 | orchestrator | Thursday 29 May 2025 00:52:24 +0000 (0:00:01.573) 0:04:46.759 ********** 2025-05-29 00:55:01.696681 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-novncproxy) 2025-05-29 00:55:01.696693 | orchestrator | 2025-05-29 00:55:01.696705 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config] *** 2025-05-29 00:55:01.696716 | orchestrator | Thursday 29 May 2025 00:52:25 +0000 (0:00:01.596) 0:04:48.355 ********** 2025-05-29 00:55:01.696734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-29 00:55:01.696748 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 
'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-29 00:55:01.696767 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}}) 2025-05-29 00:55:01.696779 | orchestrator | 2025-05-29 00:55:01.696790 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-novncproxy when using single external frontend] *** 2025-05-29 00:55:01.696801 | orchestrator | Thursday 29 May 2025 00:52:31 +0000 (0:00:05.182) 0:04:53.537 ********** 2025-05-29 00:55:01.696813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-29 00:55:01.696824 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.696836 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-29 00:55:01.696848 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.696865 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'group': 'nova-novncproxy', 'enabled': True, 'haproxy': {'nova_novncproxy': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_novncproxy_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-29 00:55:01.696877 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.696889 | orchestrator | 2025-05-29 00:55:01.696899 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-novncproxy] ***** 2025-05-29 00:55:01.696911 | orchestrator | Thursday 29 May 2025 00:52:32 +0000 (0:00:01.193) 0:04:54.731 ********** 2025-05-29 00:55:01.696922 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-29 00:55:01.696934 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-29 00:55:01.696946 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.696957 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-29 00:55:01.696988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-29 00:55:01.697011 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.697023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-29 00:55:01.697035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova_novncproxy_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6080', 'listen_port': '6080', 'backend_http_extra': ['timeout tunnel 1h']}})  2025-05-29 00:55:01.697046 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.697057 | orchestrator | 2025-05-29 00:55:01.697068 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-29 00:55:01.697079 | orchestrator | Thursday 29 May 2025 00:52:34 +0000 (0:00:02.282) 0:04:57.014 ********** 2025-05-29 00:55:01.697090 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.697101 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.697112 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.697123 | orchestrator | 2025-05-29 00:55:01.697134 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-29 00:55:01.697145 | orchestrator | Thursday 29 May 2025 00:52:37 +0000 (0:00:03.037) 0:05:00.052 ********** 2025-05-29 00:55:01.697156 | 
orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.697167 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.697178 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.697189 | orchestrator | 2025-05-29 00:55:01.697200 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-spicehtml5proxy] ************* 2025-05-29 00:55:01.697211 | orchestrator | Thursday 29 May 2025 00:52:41 +0000 (0:00:03.538) 0:05:03.590 ********** 2025-05-29 00:55:01.697223 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-spicehtml5proxy) 2025-05-29 00:55:01.697235 | orchestrator | 2025-05-29 00:55:01.697246 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-spicehtml5proxy haproxy config] *** 2025-05-29 00:55:01.697258 | orchestrator | Thursday 29 May 2025 00:52:42 +0000 (0:00:01.358) 0:05:04.949 ********** 2025-05-29 00:55:01.697270 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-29 00:55:01.697282 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.697298 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-29 00:55:01.697310 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.697322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-29 00:55:01.697341 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.697434 | orchestrator | 2025-05-29 00:55:01.697448 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-spicehtml5proxy when using single external frontend] *** 2025-05-29 00:55:01.697460 | orchestrator | Thursday 29 May 2025 00:52:44 +0000 (0:00:01.917) 0:05:06.867 ********** 2025-05-29 00:55:01.697481 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-29 00:55:01.697493 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.697506 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 
'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-29 00:55:01.697517 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.697528 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'group': 'nova-spicehtml5proxy', 'enabled': False, 'haproxy': {'nova_spicehtml5proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}, 'nova_spicehtml5proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6082', 'listen_port': '6082', 'backend_http_extra': ['timeout tunnel 1h']}}}})  2025-05-29 00:55:01.697541 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.697552 | orchestrator | 2025-05-29 00:55:01.697564 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-spicehtml5proxy] *** 2025-05-29 00:55:01.697575 | orchestrator | Thursday 29 May 2025 00:52:46 +0000 (0:00:02.010) 0:05:08.877 ********** 2025-05-29 00:55:01.697587 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.697597 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.697608 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.697619 | orchestrator | 2025-05-29 00:55:01.697630 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-29 00:55:01.697641 | orchestrator | Thursday 29 May 2025 00:52:48 +0000 (0:00:01.833) 0:05:10.711 ********** 2025-05-29 00:55:01.697653 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:55:01.697664 | orchestrator | ok: 
[testbed-node-1] 2025-05-29 00:55:01.697675 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:55:01.697686 | orchestrator | 2025-05-29 00:55:01.697697 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-29 00:55:01.697709 | orchestrator | Thursday 29 May 2025 00:52:51 +0000 (0:00:02.960) 0:05:13.671 ********** 2025-05-29 00:55:01.697720 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:55:01.697730 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:55:01.697741 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:55:01.697752 | orchestrator | 2025-05-29 00:55:01.697763 | orchestrator | TASK [nova-cell : Configure loadbalancer for nova-serialproxy] ***************** 2025-05-29 00:55:01.697774 | orchestrator | Thursday 29 May 2025 00:52:54 +0000 (0:00:03.326) 0:05:16.998 ********** 2025-05-29 00:55:01.697785 | orchestrator | included: /ansible/roles/nova-cell/tasks/cell_proxy_loadbalancer.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item=nova-serialproxy) 2025-05-29 00:55:01.697807 | orchestrator | 2025-05-29 00:55:01.697819 | orchestrator | TASK [haproxy-config : Copying over nova-cell:nova-serialproxy haproxy config] *** 2025-05-29 00:55:01.697829 | orchestrator | Thursday 29 May 2025 00:52:55 +0000 (0:00:01.338) 0:05:18.337 ********** 2025-05-29 00:55:01.697846 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-29 00:55:01.697859 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.697871 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-29 00:55:01.697882 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.697900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-29 00:55:01.697913 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.697925 | orchestrator | 2025-05-29 00:55:01.697937 | orchestrator | TASK [haproxy-config : Add configuration for nova-cell:nova-serialproxy when using single external frontend] *** 2025-05-29 00:55:01.697947 | orchestrator | Thursday 29 May 2025 00:52:57 +0000 (0:00:01.565) 0:05:19.902 ********** 2025-05-29 00:55:01.697957 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-29 00:55:01.697968 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.697977 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-29 00:55:01.697988 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.697998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'group': 'nova-serialproxy', 'enabled': False, 'haproxy': {'nova_serialconsole_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}, 'nova_serialconsole_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '6083', 'listen_port': '6083', 'backend_http_extra': ['timeout tunnel 10m']}}}})  2025-05-29 00:55:01.698058 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.698072 | orchestrator | 2025-05-29 00:55:01.698083 | orchestrator | TASK [haproxy-config : Configuring firewall for nova-cell:nova-serialproxy] **** 2025-05-29 00:55:01.698094 | orchestrator | Thursday 29 May 2025 00:52:59 +0000 (0:00:01.933) 0:05:21.836 ********** 2025-05-29 00:55:01.698104 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.698114 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.698124 | orchestrator | skipping: [testbed-node-2] 2025-05-29 
00:55:01.698135 | orchestrator | 2025-05-29 00:55:01.698145 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL users config] ********** 2025-05-29 00:55:01.698155 | orchestrator | Thursday 29 May 2025 00:53:01 +0000 (0:00:02.051) 0:05:23.888 ********** 2025-05-29 00:55:01.698165 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:55:01.698176 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:55:01.698186 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:55:01.698196 | orchestrator | 2025-05-29 00:55:01.698212 | orchestrator | TASK [proxysql-config : Copying over nova-cell ProxySQL rules config] ********** 2025-05-29 00:55:01.698222 | orchestrator | Thursday 29 May 2025 00:53:04 +0000 (0:00:02.812) 0:05:26.700 ********** 2025-05-29 00:55:01.698233 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:55:01.698243 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:55:01.698253 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:55:01.698262 | orchestrator | 2025-05-29 00:55:01.698272 | orchestrator | TASK [include_role : octavia] ************************************************** 2025-05-29 00:55:01.698282 | orchestrator | Thursday 29 May 2025 00:53:08 +0000 (0:00:03.890) 0:05:30.591 ********** 2025-05-29 00:55:01.698292 | orchestrator | included: octavia for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.698303 | orchestrator | 2025-05-29 00:55:01.698313 | orchestrator | TASK [haproxy-config : Copying over octavia haproxy config] ******************** 2025-05-29 00:55:01.698323 | orchestrator | Thursday 29 May 2025 00:53:09 +0000 (0:00:01.709) 0:05:32.301 ********** 2025-05-29 00:55:01.698341 | orchestrator | changed: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.698380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-29 00:55:01.698400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.698469 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': 
{'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.698482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.698499 | orchestrator | changed: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 
'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.698510 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-29 00:55:01.698540 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.698552 | orchestrator | changed: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.698570 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.698581 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-29 00:55:01.698592 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.698602 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.698626 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.698638 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.698654 | orchestrator | 2025-05-29 00:55:01.698664 | orchestrator | TASK [haproxy-config : Add configuration for octavia when using single external frontend] *** 2025-05-29 00:55:01.698674 | orchestrator | Thursday 29 May 2025 00:53:14 +0000 (0:00:04.725) 0:05:37.027 ********** 2025-05-29 00:55:01.698711 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-29 00:55:01.698723 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  
2025-05-29 00:55:01.698737 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.698748 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.698767 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.698779 | 
orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.698797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-29 00:55:01.698807 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-29 00:55:01.698818 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.698832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.698843 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.698854 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.698871 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-api', 'value': {'container_name': 'octavia_api', 'group': 'octavia-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-api:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9876'], 'timeout': '30'}, 'haproxy': {'octavia_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}, 'octavia_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}}}})  2025-05-29 00:55:01.698891 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-driver-agent', 'value': {'container_name': 'octavia_driver_agent', 'group': 'octavia-driver-agent', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-driver-agent:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-driver-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '', 'octavia_driver_agent:/var/run/octavia/'], 'dimensions': {}}})  2025-05-29 00:55:01.698902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-health-manager', 'value': {'container_name': 'octavia_health_manager', 'group': 'octavia-health-manager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-health-manager:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-health-manager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-health-manager 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.698912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-housekeeping', 'value': {'container_name': 'octavia_housekeeping', 
'group': 'octavia-housekeeping', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-housekeeping:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-housekeeping/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-housekeeping 3306'], 'timeout': '30'}}})  2025-05-29 00:55:01.698927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia-worker', 'value': {'container_name': 'octavia_worker', 'group': 'octavia-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/octavia-worker:14.0.1.20241206', 'volumes': ['/etc/kolla/octavia-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port octavia-worker 5672'], 'timeout': '30'}}})  2025-05-29 00:55:01.698937 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.698947 | orchestrator | 2025-05-29 00:55:01.698957 | orchestrator | TASK [haproxy-config : Configuring firewall for octavia] *********************** 2025-05-29 00:55:01.698967 | orchestrator | Thursday 29 May 2025 00:53:15 +0000 (0:00:00.965) 0:05:37.992 ********** 2025-05-29 00:55:01.698978 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-29 00:55:01.698988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-29 00:55:01.698998 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-29 00:55:01.699008 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.699024 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-29 00:55:01.699041 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.699051 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-29 00:55:01.699061 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'octavia_api_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9876', 'listen_port': '9876', 'tls_backend': 'no'}})  2025-05-29 00:55:01.699071 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.699081 | orchestrator | 2025-05-29 00:55:01.699091 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL users config] ************ 2025-05-29 00:55:01.699100 | orchestrator | Thursday 29 May 2025 00:53:16 +0000 (0:00:01.407) 0:05:39.400 ********** 2025-05-29 00:55:01.699110 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.699119 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.699130 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.699139 | orchestrator | 2025-05-29 00:55:01.699150 | orchestrator | TASK [proxysql-config : Copying over octavia ProxySQL rules config] ************ 2025-05-29 00:55:01.699160 | orchestrator | Thursday 29 May 2025 00:53:18 +0000 (0:00:01.450) 0:05:40.851 
********** 2025-05-29 00:55:01.699170 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:55:01.699180 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:55:01.699189 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:55:01.699199 | orchestrator | 2025-05-29 00:55:01.699209 | orchestrator | TASK [include_role : opensearch] *********************************************** 2025-05-29 00:55:01.699218 | orchestrator | Thursday 29 May 2025 00:53:20 +0000 (0:00:02.448) 0:05:43.299 ********** 2025-05-29 00:55:01.699228 | orchestrator | included: opensearch for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.699238 | orchestrator | 2025-05-29 00:55:01.699248 | orchestrator | TASK [haproxy-config : Copying over opensearch haproxy config] ***************** 2025-05-29 00:55:01.699257 | orchestrator | Thursday 29 May 2025 00:53:22 +0000 (0:00:01.557) 0:05:44.857 ********** 2025-05-29 00:55:01.699268 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:55:01.699284 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:55:01.699308 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:55:01.699321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:55:01.699332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:55:01.699377 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 
'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:55:01.699397 | orchestrator | 2025-05-29 00:55:01.699407 | orchestrator | TASK [haproxy-config : Add configuration for opensearch when using single external frontend] *** 2025-05-29 00:55:01.699417 | orchestrator | Thursday 29 May 2025 00:53:28 +0000 (0:00:06.642) 0:05:51.499 ********** 2025-05-29 00:55:01.699434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 
'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-29 00:55:01.699445 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-29 00:55:01.699456 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.699467 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-29 00:55:01.699482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-29 00:55:01.699498 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.699514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-29 00:55:01.699525 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-29 00:55:01.699536 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.699546 | orchestrator | 2025-05-29 00:55:01.699556 | orchestrator | TASK [haproxy-config : Configuring firewall for opensearch] ******************** 2025-05-29 00:55:01.699566 | orchestrator | Thursday 29 May 2025 00:53:29 +0000 (0:00:00.937) 0:05:52.437 ********** 2025-05-29 00:55:01.699576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-29 
00:55:01.699586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-29 00:55:01.699596 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-29 00:55:01.699607 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.699617 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-29 00:55:01.699627 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-29 00:55:01.699637 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-29 00:55:01.699653 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.699667 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}})  2025-05-29 00:55:01.699677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-29 00:55:01.699688 
| orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch_dashboards_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}})  2025-05-29 00:55:01.699698 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.699708 | orchestrator | 2025-05-29 00:55:01.699717 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL users config] ********* 2025-05-29 00:55:01.699727 | orchestrator | Thursday 29 May 2025 00:53:31 +0000 (0:00:01.374) 0:05:53.812 ********** 2025-05-29 00:55:01.699737 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.699747 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.699756 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.699766 | orchestrator | 2025-05-29 00:55:01.699776 | orchestrator | TASK [proxysql-config : Copying over opensearch ProxySQL rules config] ********* 2025-05-29 00:55:01.699785 | orchestrator | Thursday 29 May 2025 00:53:32 +0000 (0:00:00.734) 0:05:54.546 ********** 2025-05-29 00:55:01.699795 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.699805 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.699815 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.699825 | orchestrator | 2025-05-29 00:55:01.699840 | orchestrator | TASK [include_role : prometheus] *********************************************** 2025-05-29 00:55:01.699850 | orchestrator | Thursday 29 May 2025 00:53:33 +0000 (0:00:01.759) 0:05:56.305 ********** 2025-05-29 00:55:01.699861 | orchestrator | included: prometheus for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.699871 | orchestrator | 2025-05-29 00:55:01.699880 | orchestrator | TASK [haproxy-config : Copying over prometheus haproxy config] ***************** 2025-05-29 00:55:01.699890 | orchestrator | Thursday 29 May 2025 
00:53:35 +0000 (0:00:01.904) 0:05:58.210 ********** 2025-05-29 00:55:01.699900 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-29 00:55:01.699912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 00:55:01.699923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.699941 
| orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.699956 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-29 00:55:01.699967 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 00:55:01.699983 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 00:55:01.699994 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700015 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 00:55:01.700036 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-29 00:55:01.700047 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 00:55:01.700057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': 
['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700073 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 00:55:01.700095 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 
'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-29 00:55:01.700115 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 00:55:01.700130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-29 00:55:01.700148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 00:55:01.700186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 00:55:01.700196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700211 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700237 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 00:55:01.700251 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700267 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 
'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-29 00:55:01.700285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 00:55:01.700301 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 
'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 00:55:01.700337 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700410 | orchestrator | 2025-05-29 00:55:01.700422 | orchestrator | TASK [haproxy-config : Add configuration for prometheus when using single external frontend] *** 2025-05-29 00:55:01.700432 | orchestrator | Thursday 29 May 2025 00:53:40 +0000 (0:00:04.966) 0:06:03.176 ********** 2025-05-29 00:55:01.700447 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 00:55:01.700455 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 00:55:01.700463 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700485 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 00:55:01.700499 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-29 00:55:01.700514 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 00:55:01.700523 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 00:55:01.700552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700560 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.700576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 00:55:01.700585 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 00:55:01.700601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700609 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 00:55:01.700630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-29 00:55:01.700645 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 00:55:01.700658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700675 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 00:55:01.700684 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700692 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.700705 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 
'active_passive': True}}}})  2025-05-29 00:55:01.700713 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 00:55:01.700883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700903 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': 
['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 00:55:01.700921 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-29 00:55:01.700937 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': 
False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 00:55:01.700946 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 00:55:01.700981 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 00:55:01.700989 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.700997 | orchestrator | 2025-05-29 00:55:01.701005 | orchestrator | TASK [haproxy-config : Configuring firewall for prometheus] ******************** 2025-05-29 00:55:01.701013 | orchestrator | Thursday 29 May 2025 00:53:41 +0000 (0:00:01.210) 0:06:04.387 ********** 2025-05-29 00:55:01.701022 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-29 00:55:01.701030 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-29 00:55:01.701040 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-29 00:55:01.701049 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-29 00:55:01.701058 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.701066 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-29 00:55:01.701078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-29 00:55:01.701087 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-29 00:55:01.701095 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-29 00:55:01.701109 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.701117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}})  2025-05-29 00:55:01.701125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_server_external', 'value': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}})  2025-05-29 00:55:01.701137 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'prometheus_alertmanager', 'value': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-29 00:55:01.701146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus_alertmanager_external', 'value': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}})  2025-05-29 00:55:01.701154 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.701162 | orchestrator | 2025-05-29 00:55:01.701170 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL users config] ********* 2025-05-29 00:55:01.701178 | orchestrator | Thursday 29 May 2025 00:53:43 +0000 (0:00:01.927) 0:06:06.314 ********** 2025-05-29 00:55:01.701186 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.701194 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.701202 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.701210 | orchestrator | 2025-05-29 00:55:01.701218 | orchestrator | TASK [proxysql-config : Copying over prometheus ProxySQL rules config] ********* 2025-05-29 00:55:01.701226 | orchestrator | Thursday 29 May 2025 00:53:44 +0000 (0:00:01.035) 0:06:07.349 ********** 2025-05-29 00:55:01.701234 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.701242 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.701250 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.701258 | orchestrator | 2025-05-29 00:55:01.701266 | orchestrator | TASK [include_role : rabbitmq] ************************************************* 2025-05-29 00:55:01.701274 | orchestrator | Thursday 29 May 2025 00:53:46 +0000 (0:00:01.752) 0:06:09.102 ********** 2025-05-29 00:55:01.701282 | orchestrator | included: 
rabbitmq for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.701290 | orchestrator | 2025-05-29 00:55:01.701298 | orchestrator | TASK [haproxy-config : Copying over rabbitmq haproxy config] ******************* 2025-05-29 00:55:01.701306 | orchestrator | Thursday 29 May 2025 00:53:48 +0000 (0:00:01.590) 0:06:10.692 ********** 2025-05-29 00:55:01.701314 | orchestrator | changed: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-29 00:55:01.701327 | orchestrator | changed: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-29 00:55:01.701369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}}) 2025-05-29 00:55:01.701380 | orchestrator | 2025-05-29 00:55:01.701388 | orchestrator | TASK [haproxy-config : Add configuration for rabbitmq when using single external frontend] *** 2025-05-29 00:55:01.701397 | orchestrator | Thursday 29 May 2025 00:53:51 +0000 (0:00:03.025) 0:06:13.717 ********** 2025-05-29 00:55:01.701405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': 
{'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-29 00:55:01.701414 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.701422 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-29 00:55:01.701450 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.701464 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'rabbitmq', 'value': {'container_name': 'rabbitmq', 'group': None, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/rabbitmq:3.13.7.20241206', 'bootstrap_environment': {'KOLLA_BOOTSTRAP': None, 'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'RABBITMQ_CLUSTER_COOKIE': None, 'RABBITMQ_LOG_DIR': '/var/log/kolla/rabbitmq'}, 'volumes': ['/etc/kolla/rabbitmq/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'rabbitmq:/var/lib/rabbitmq/', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_rabbitmq'], 'timeout': '30'}, 'haproxy': {'rabbitmq_management': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}}}})  2025-05-29 00:55:01.701473 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.701481 | orchestrator | 2025-05-29 00:55:01.701488 | orchestrator | TASK [haproxy-config : Configuring firewall for rabbitmq] ********************** 2025-05-29 00:55:01.701496 | orchestrator | Thursday 29 May 2025 00:53:51 +0000 (0:00:00.725) 0:06:14.443 ********** 2025-05-29 00:55:01.701504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-29 00:55:01.701514 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.701528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-29 00:55:01.701538 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.701548 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'rabbitmq_management', 'value': {'enabled': 'yes', 'mode': 
'http', 'port': '15672', 'host_group': 'rabbitmq'}})  2025-05-29 00:55:01.701557 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.701566 | orchestrator | 2025-05-29 00:55:01.701576 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL users config] *********** 2025-05-29 00:55:01.701585 | orchestrator | Thursday 29 May 2025 00:53:52 +0000 (0:00:00.854) 0:06:15.297 ********** 2025-05-29 00:55:01.701594 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.701603 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.701613 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.701622 | orchestrator | 2025-05-29 00:55:01.701631 | orchestrator | TASK [proxysql-config : Copying over rabbitmq ProxySQL rules config] *********** 2025-05-29 00:55:01.701640 | orchestrator | Thursday 29 May 2025 00:53:53 +0000 (0:00:00.746) 0:06:16.044 ********** 2025-05-29 00:55:01.701650 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:55:01.701660 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:55:01.701669 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:55:01.701677 | orchestrator | 2025-05-29 00:55:01.701685 | orchestrator | TASK [include_role : skyline] ************************************************** 2025-05-29 00:55:01.701693 | orchestrator | Thursday 29 May 2025 00:53:55 +0000 (0:00:01.888) 0:06:17.932 ********** 2025-05-29 00:55:01.701701 | orchestrator | included: skyline for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:55:01.701709 | orchestrator | 2025-05-29 00:55:01.701717 | orchestrator | TASK [haproxy-config : Copying over skyline haproxy config] ******************** 2025-05-29 00:55:01.701725 | orchestrator | Thursday 29 May 2025 00:53:57 +0000 (0:00:01.959) 0:06:19.892 ********** 2025-05-29 00:55:01.701734 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.701753 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.701766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.701775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.701784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.701801 | orchestrator | changed: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}}) 2025-05-29 00:55:01.701810 | orchestrator | 2025-05-29 00:55:01.701818 | orchestrator | TASK [haproxy-config : Add configuration for skyline when using single external frontend] *** 2025-05-29 00:55:01.701827 | orchestrator | Thursday 29 May 2025 00:54:05 +0000 (0:00:07.990) 
0:06:27.882 **********
2025-05-29 00:55:01.701835 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.701849 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.701857 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.701866 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.701880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.701888 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.701900 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-apiserver', 'value': {'container_name': 'skyline_apiserver', 'group': 'skyline-apiserver', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-apiserver:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-apiserver/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9998/docs'], 'timeout': '30'}, 'haproxy': {'skyline_apiserver': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}, 'skyline_apiserver_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.701914 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline-console', 'value': {'container_name': 'skyline_console', 'group': 'skyline-console', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/skyline-console:4.0.2.20241206', 'volumes': ['/etc/kolla/skyline-console/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9999/docs'], 'timeout': '30'}, 'haproxy': {'skyline_console': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}, 'skyline_console_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}}}})
2025-05-29 00:55:01.701922 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.701930 | orchestrator |
2025-05-29 00:55:01.701938 | orchestrator | TASK [haproxy-config : Configuring firewall for skyline] ***********************
2025-05-29 00:55:01.701947 | orchestrator | Thursday 29 May 2025 00:54:06 +0000 (0:00:01.069) 0:06:28.952 **********
2025-05-29 00:55:01.701955 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-29 00:55:01.701963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-29 00:55:01.701976 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-29 00:55:01.701984 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-29 00:55:01.701993 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.702001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-29 00:55:01.702009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-29 00:55:01.702043 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-29 00:55:01.702051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-29 00:55:01.702061 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.702076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-29 00:55:01.702084 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_apiserver_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9998', 'listen_port': '9998', 'tls_backend': 'no'}})
2025-05-29 00:55:01.702092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console', 'value': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-29 00:55:01.702100 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'skyline_console_external', 'value': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9999', 'listen_port': '9999', 'tls_backend': 'no'}})
2025-05-29 00:55:01.702108 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.702116 | orchestrator |
2025-05-29 00:55:01.702124 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL users config] ************
2025-05-29 00:55:01.702132 | orchestrator | Thursday 29 May 2025 00:54:08 +0000 (0:00:01.772) 0:06:30.725
**********
2025-05-29 00:55:01.702140 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.702148 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.702156 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.702164 | orchestrator |
2025-05-29 00:55:01.702172 | orchestrator | TASK [proxysql-config : Copying over skyline ProxySQL rules config] ************
2025-05-29 00:55:01.702179 | orchestrator | Thursday 29 May 2025 00:54:09 +0000 (0:00:01.500) 0:06:32.225 **********
2025-05-29 00:55:01.702187 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.702195 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.702203 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.702211 | orchestrator |
2025-05-29 00:55:01.702224 | orchestrator | TASK [include_role : swift] ****************************************************
2025-05-29 00:55:01.702232 | orchestrator | Thursday 29 May 2025 00:54:12 +0000 (0:00:02.564) 0:06:34.789 **********
2025-05-29 00:55:01.702245 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.702254 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.702262 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.702269 | orchestrator |
2025-05-29 00:55:01.702277 | orchestrator | TASK [include_role : tacker] ***************************************************
2025-05-29 00:55:01.702285 | orchestrator | Thursday 29 May 2025 00:54:12 +0000 (0:00:00.319) 0:06:35.109 **********
2025-05-29 00:55:01.702293 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.702301 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.702309 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.702316 | orchestrator |
2025-05-29 00:55:01.702324 | orchestrator | TASK [include_role : trove] ****************************************************
2025-05-29 00:55:01.702332 | orchestrator | Thursday 29 May 2025 00:54:13 +0000 (0:00:00.586) 0:06:35.696 **********
2025-05-29 00:55:01.702340 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.702367 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.702375 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.702384 | orchestrator |
2025-05-29 00:55:01.702392 | orchestrator | TASK [include_role : venus] ****************************************************
2025-05-29 00:55:01.702400 | orchestrator | Thursday 29 May 2025 00:54:13 +0000 (0:00:00.586) 0:06:36.283 **********
2025-05-29 00:55:01.702408 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.702415 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.702423 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.702431 | orchestrator |
2025-05-29 00:55:01.702439 | orchestrator | TASK [include_role : watcher] **************************************************
2025-05-29 00:55:01.702447 | orchestrator | Thursday 29 May 2025 00:54:14 +0000 (0:00:00.323) 0:06:36.607 **********
2025-05-29 00:55:01.702455 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.702463 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.702470 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.702478 | orchestrator |
2025-05-29 00:55:01.702486 | orchestrator | TASK [include_role : zun] ******************************************************
2025-05-29 00:55:01.702494 | orchestrator | Thursday 29 May 2025 00:54:14 +0000 (0:00:00.706) 0:06:37.313 **********
2025-05-29 00:55:01.702502 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.702510 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.702518 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.702525 | orchestrator |
2025-05-29 00:55:01.702533 | orchestrator | RUNNING HANDLER [loadbalancer : Check IP addresses on the API interface] *******
2025-05-29 00:55:01.702541 | orchestrator | Thursday 29 May 2025 00:54:15 +0000 (0:00:01.073) 0:06:38.387 **********
2025-05-29 00:55:01.702549 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:55:01.702558 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:55:01.702566 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:55:01.702574 | orchestrator |
2025-05-29 00:55:01.702582 | orchestrator | RUNNING HANDLER [loadbalancer : Group HA nodes by status] **********************
2025-05-29 00:55:01.702589 | orchestrator | Thursday 29 May 2025 00:54:16 +0000 (0:00:00.660) 0:06:39.048 **********
2025-05-29 00:55:01.702597 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:55:01.702605 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:55:01.702613 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:55:01.702621 | orchestrator |
2025-05-29 00:55:01.702629 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup keepalived container] **************
2025-05-29 00:55:01.702637 | orchestrator | Thursday 29 May 2025 00:54:17 +0000 (0:00:00.621) 0:06:39.669 **********
2025-05-29 00:55:01.702645 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:55:01.702652 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:55:01.702660 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:55:01.702668 | orchestrator |
2025-05-29 00:55:01.702676 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup haproxy container] *****************
2025-05-29 00:55:01.702684 | orchestrator | Thursday 29 May 2025 00:54:18 +0000 (0:00:01.327) 0:06:40.997 **********
2025-05-29 00:55:01.702692 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:55:01.702704 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:55:01.702712 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:55:01.702720 | orchestrator |
2025-05-29 00:55:01.702728 | orchestrator | RUNNING HANDLER [loadbalancer : Stop backup proxysql container] ****************
2025-05-29 00:55:01.702740 | orchestrator | Thursday 29 May 2025 00:54:19 +0000 (0:00:01.497) 0:06:42.494 **********
2025-05-29 00:55:01.702748 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:55:01.702756 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:55:01.702764 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:55:01.702772 | orchestrator |
2025-05-29 00:55:01.702780 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup haproxy container] ****************
2025-05-29 00:55:01.702787 | orchestrator | Thursday 29 May 2025 00:54:20 +0000 (0:00:00.989) 0:06:43.483 **********
2025-05-29 00:55:01.702795 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.702803 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.702811 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.702819 | orchestrator |
2025-05-29 00:55:01.702827 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup haproxy to start] **************
2025-05-29 00:55:01.702835 | orchestrator | Thursday 29 May 2025 00:54:29 +0000 (0:00:08.939) 0:06:52.423 **********
2025-05-29 00:55:01.702843 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:55:01.702851 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:55:01.702859 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:55:01.702866 | orchestrator |
2025-05-29 00:55:01.702874 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup proxysql container] ***************
2025-05-29 00:55:01.702882 | orchestrator | Thursday 29 May 2025 00:54:31 +0000 (0:00:01.165) 0:06:53.588 **********
2025-05-29 00:55:01.702890 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.702898 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.702906 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.702914 | orchestrator |
2025-05-29 00:55:01.702922 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for backup proxysql to start] *************
2025-05-29 00:55:01.702930 | orchestrator | Thursday 29 May 2025 00:54:42 +0000 (0:00:11.869) 0:07:05.458 **********
2025-05-29 00:55:01.702938 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:55:01.702946 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:55:01.702953 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:55:01.702961 | orchestrator |
2025-05-29 00:55:01.702974 | orchestrator | RUNNING HANDLER [loadbalancer : Start backup keepalived container] *************
2025-05-29 00:55:01.702982 | orchestrator | Thursday 29 May 2025 00:54:43 +0000 (0:00:00.726) 0:07:06.184 **********
2025-05-29 00:55:01.702990 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:55:01.702998 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:55:01.703006 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:55:01.703013 | orchestrator |
2025-05-29 00:55:01.703021 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master haproxy container] *****************
2025-05-29 00:55:01.703029 | orchestrator | Thursday 29 May 2025 00:54:48 +0000 (0:00:04.888) 0:07:11.072 **********
2025-05-29 00:55:01.703037 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.703045 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.703052 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.703060 | orchestrator |
2025-05-29 00:55:01.703068 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master proxysql container] ****************
2025-05-29 00:55:01.703076 | orchestrator | Thursday 29 May 2025 00:54:49 +0000 (0:00:00.607) 0:07:11.679 **********
2025-05-29 00:55:01.703083 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.703091 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.703099 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.703107 | orchestrator |
2025-05-29 00:55:01.703115 | orchestrator | RUNNING HANDLER [loadbalancer : Stop master keepalived container] **************
2025-05-29 00:55:01.703122 | orchestrator | Thursday 29 May 2025 00:54:49 +0000 (0:00:00.340) 0:07:12.020 **********
2025-05-29 00:55:01.703130 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.703155 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.703163 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.703180 | orchestrator |
2025-05-29 00:55:01.703188 | orchestrator | RUNNING HANDLER [loadbalancer : Start master haproxy container] ****************
2025-05-29 00:55:01.703196 | orchestrator | Thursday 29 May 2025 00:54:50 +0000 (0:00:00.608) 0:07:12.628 **********
2025-05-29 00:55:01.703204 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.703212 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.703220 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.703228 | orchestrator |
2025-05-29 00:55:01.703236 | orchestrator | RUNNING HANDLER [loadbalancer : Start master proxysql container] ***************
2025-05-29 00:55:01.703243 | orchestrator | Thursday 29 May 2025 00:54:50 +0000 (0:00:00.623) 0:07:13.252 **********
2025-05-29 00:55:01.703251 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.703259 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.703267 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.703274 | orchestrator |
2025-05-29 00:55:01.703282 | orchestrator | RUNNING HANDLER [loadbalancer : Start master keepalived container] *************
2025-05-29 00:55:01.703290 | orchestrator | Thursday 29 May 2025 00:54:51 +0000 (0:00:00.613) 0:07:13.865 **********
2025-05-29 00:55:01.703298 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:55:01.703306 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:55:01.703313 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:55:01.703321 | orchestrator |
2025-05-29 00:55:01.703329 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for haproxy to listen on VIP] *************
2025-05-29 00:55:01.703337 | orchestrator | Thursday 29 May 2025 00:54:51 +0000 (0:00:00.416) 0:07:14.281 **********
2025-05-29 00:55:01.703398 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:55:01.703414 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:55:01.703424 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:55:01.703432 | orchestrator |
2025-05-29 00:55:01.703440 | orchestrator | RUNNING HANDLER [loadbalancer : Wait for proxysql to listen on VIP] ************
2025-05-29 00:55:01.703448 | orchestrator | Thursday 29 May 2025 00:54:56 +0000 (0:00:05.120) 0:07:19.401 **********
2025-05-29 00:55:01.703456 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:55:01.703464 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:55:01.703472 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:55:01.703479 | orchestrator |
2025-05-29 00:55:01.703486 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 00:55:01.703493 | orchestrator | testbed-node-0 : ok=127  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-05-29 00:55:01.703501 | orchestrator | testbed-node-1 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-05-29 00:55:01.703512 | orchestrator | testbed-node-2 : ok=126  changed=79  unreachable=0 failed=0 skipped=92  rescued=0 ignored=0
2025-05-29 00:55:01.703519 | orchestrator |
2025-05-29 00:55:01.703526 | orchestrator |
2025-05-29 00:55:01.703532 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 00:55:01.703540 | orchestrator | Thursday 29 May 2025 00:54:58 +0000 (0:00:01.146) 0:07:20.548 **********
2025-05-29 00:55:01.703546 | orchestrator | ===============================================================================
2025-05-29 00:55:01.703553 | orchestrator | loadbalancer : Start backup proxysql container ------------------------- 11.87s
2025-05-29 00:55:01.703560 | orchestrator | loadbalancer : Start backup haproxy container --------------------------- 8.94s
2025-05-29 00:55:01.703567 | orchestrator | haproxy-config : Copying over skyline haproxy config -------------------- 7.99s
2025-05-29 00:55:01.703574 | orchestrator | haproxy-config : Copying over heat haproxy config ----------------------- 7.20s
2025-05-29 00:55:01.703580 | orchestrator | haproxy-config : Copying over opensearch haproxy config ----------------- 6.64s
2025-05-29 00:55:01.703587 | orchestrator | haproxy-config : Copying over nova haproxy config ----------------------- 5.48s
2025-05-29 00:55:01.703594 | orchestrator | haproxy-config : Copying over glance haproxy config --------------------- 5.44s
2025-05-29 00:55:01.703607 | orchestrator | haproxy-config : Copying over barbican haproxy config ------------------- 5.29s
2025-05-29 00:55:01.703614 | orchestrator | haproxy-config : Copying over designate haproxy config ------------------ 5.29s
2025-05-29 00:55:01.703621 | orchestrator | haproxy-config : Copying over neutron haproxy config -------------------- 5.26s
2025-05-29 00:55:01.703628 | orchestrator | haproxy-config : Copying over nova-cell:nova-novncproxy haproxy config --- 5.18s
2025-05-29 00:55:01.703639 | orchestrator | loadbalancer : Wait for haproxy to listen on VIP ------------------------ 5.12s
2025-05-29 00:55:01.703646 | orchestrator | haproxy-config : Copying over prometheus haproxy config ----------------- 4.97s
2025-05-29 00:55:01.703653 | orchestrator | haproxy-config : Copying over magnum haproxy config --------------------- 4.95s
2025-05-29 00:55:01.703660 | orchestrator | loadbalancer : Start backup keepalived container ------------------------ 4.89s
2025-05-29 00:55:01.703666 | orchestrator | haproxy-config : Copying over aodh haproxy config ----------------------- 4.73s
2025-05-29 00:55:01.703673 | orchestrator | haproxy-config : Copying over octavia haproxy config -------------------- 4.73s
2025-05-29 00:55:01.703680 | orchestrator | haproxy-config : Configuring firewall for glance ------------------------ 4.72s
2025-05-29 00:55:01.703686 | orchestrator | haproxy-config : Copying over keystone haproxy config ------------------- 4.64s
2025-05-29 00:55:01.703693 | orchestrator |
loadbalancer : Copying checks for services which are enabled ------------ 4.58s
2025-05-29 00:55:04.728156 | orchestrator | 2025-05-29 00:55:04 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:04.728642 | orchestrator | 2025-05-29 00:55:04 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:04.729427 | orchestrator | 2025-05-29 00:55:04 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:04.732526 | orchestrator | 2025-05-29 00:55:04 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:04.732555 | orchestrator | 2025-05-29 00:55:04 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:07.779268 | orchestrator | 2025-05-29 00:55:07 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:07.779661 | orchestrator | 2025-05-29 00:55:07 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:07.780150 | orchestrator | 2025-05-29 00:55:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:07.781004 | orchestrator | 2025-05-29 00:55:07 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:07.781029 | orchestrator | 2025-05-29 00:55:07 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:10.817901 | orchestrator | 2025-05-29 00:55:10 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:10.819029 | orchestrator | 2025-05-29 00:55:10 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:10.820931 | orchestrator | 2025-05-29 00:55:10 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:10.821890 | orchestrator | 2025-05-29 00:55:10 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:10.821924 | orchestrator | 2025-05-29 00:55:10 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:13.867140 | orchestrator | 2025-05-29 00:55:13 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:13.872279 | orchestrator | 2025-05-29 00:55:13 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:13.875403 | orchestrator | 2025-05-29 00:55:13 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:13.876366 | orchestrator | 2025-05-29 00:55:13 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:13.878526 | orchestrator | 2025-05-29 00:55:13 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:16.910411 | orchestrator | 2025-05-29 00:55:16 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:16.912726 | orchestrator | 2025-05-29 00:55:16 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:16.915397 | orchestrator | 2025-05-29 00:55:16 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:16.917485 | orchestrator | 2025-05-29 00:55:16 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:16.917526 | orchestrator | 2025-05-29 00:55:16 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:19.947126 | orchestrator | 2025-05-29 00:55:19 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:19.947852 | orchestrator | 2025-05-29 00:55:19 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:19.948434 | orchestrator | 2025-05-29 00:55:19 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:19.951473 | orchestrator | 2025-05-29 00:55:19 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:19.951511 | orchestrator | 2025-05-29 00:55:19 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:22.995483 | orchestrator | 2025-05-29 00:55:22 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:22.996218 | orchestrator | 2025-05-29 00:55:22 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:22.996795 | orchestrator | 2025-05-29 00:55:22 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:22.998399 | orchestrator | 2025-05-29 00:55:22 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:22.998425 | orchestrator | 2025-05-29 00:55:22 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:26.030998 | orchestrator | 2025-05-29 00:55:26 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:26.031110 | orchestrator | 2025-05-29 00:55:26 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:26.031386 | orchestrator | 2025-05-29 00:55:26 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:26.031850 | orchestrator | 2025-05-29 00:55:26 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:26.031944 | orchestrator | 2025-05-29 00:55:26 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:29.084893 | orchestrator | 2025-05-29 00:55:29 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:29.085178 | orchestrator | 2025-05-29 00:55:29 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:29.086664 | orchestrator | 2025-05-29 00:55:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:29.087371 | orchestrator | 2025-05-29 00:55:29 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:29.087386 | orchestrator | 2025-05-29 00:55:29 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:32.141753 | orchestrator | 2025-05-29 00:55:32 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:32.142180 | orchestrator | 2025-05-29 00:55:32 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:32.144508 | orchestrator | 2025-05-29 00:55:32 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:32.144612 | orchestrator | 2025-05-29 00:55:32 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:32.144998 | orchestrator | 2025-05-29 00:55:32 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:35.189132 | orchestrator | 2025-05-29 00:55:35 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:35.192572 | orchestrator | 2025-05-29 00:55:35 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:35.195057 | orchestrator | 2025-05-29 00:55:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:35.198971 | orchestrator | 2025-05-29 00:55:35 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:35.199039 | orchestrator | 2025-05-29 00:55:35 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:38.233570 | orchestrator | 2025-05-29 00:55:38 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:38.234064 | orchestrator | 2025-05-29 00:55:38 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:38.235208 | orchestrator | 2025-05-29 00:55:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:38.236950 | orchestrator | 2025-05-29 00:55:38 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:38.236996 | orchestrator | 2025-05-29 00:55:38 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:41.296636 | orchestrator | 2025-05-29 00:55:41 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:41.297081 | orchestrator | 2025-05-29 00:55:41 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:41.299635 | orchestrator | 2025-05-29 00:55:41 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:41.300579 | orchestrator | 2025-05-29 00:55:41 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:41.300885 | orchestrator | 2025-05-29 00:55:41 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:44.346774 | orchestrator | 2025-05-29 00:55:44 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:44.348875 | orchestrator | 2025-05-29 00:55:44 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:44.352055 | orchestrator | 2025-05-29 00:55:44 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:44.352107 | orchestrator | 2025-05-29 00:55:44 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:44.352120 | orchestrator | 2025-05-29 00:55:44 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:47.397063 | orchestrator | 2025-05-29 00:55:47 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:47.399415 | orchestrator | 2025-05-29 00:55:47 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:47.402193 | orchestrator | 2025-05-29 00:55:47 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:47.406340 | orchestrator | 2025-05-29 00:55:47 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:47.406380 | orchestrator | 2025-05-29 00:55:47 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:50.466541 | orchestrator | 2025-05-29 00:55:50 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:50.468199 | orchestrator | 2025-05-29 00:55:50 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:50.471612 | orchestrator | 2025-05-29 00:55:50 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:50.473582 | orchestrator | 2025-05-29 00:55:50 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:50.473725 | orchestrator | 2025-05-29 00:55:50 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:53.534807 | orchestrator | 2025-05-29 00:55:53 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:53.538459 | orchestrator | 2025-05-29 00:55:53 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:53.545002 | orchestrator | 2025-05-29 00:55:53 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:53.550774 | orchestrator | 2025-05-29 00:55:53 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:53.550811 | orchestrator | 2025-05-29 00:55:53 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:56.606906 | orchestrator | 2025-05-29 00:55:56 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:56.607972 | orchestrator | 2025-05-29 00:55:56 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:56.609944 | orchestrator | 2025-05-29 00:55:56 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:56.611945 | orchestrator | 2025-05-29 00:55:56 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:56.611997 | orchestrator | 2025-05-29 00:55:56 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:55:59.660548 | orchestrator | 2025-05-29 00:55:59 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:55:59.661563 | orchestrator | 2025-05-29 00:55:59 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:55:59.661593 | orchestrator | 2025-05-29 00:55:59 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:55:59.664746 | orchestrator | 2025-05-29 00:55:59 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:55:59.664789 | orchestrator | 2025-05-29 00:55:59 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:56:02.715976 | orchestrator | 2025-05-29 00:56:02 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:56:02.716299 | orchestrator | 2025-05-29 00:56:02 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:56:02.717212 | orchestrator | 2025-05-29 00:56:02 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:56:02.718402 | orchestrator | 2025-05-29 00:56:02 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:56:02.718431 | orchestrator | 2025-05-29 00:56:02 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:56:05.766637 | orchestrator | 2025-05-29 00:56:05 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:56:05.766773 | orchestrator | 2025-05-29 00:56:05 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:56:05.768285 | orchestrator | 2025-05-29 00:56:05 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:56:05.770588 | orchestrator | 2025-05-29 00:56:05 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:56:05.770672 | orchestrator | 2025-05-29 00:56:05 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:56:08.826480 | orchestrator | 2025-05-29 00:56:08 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:56:08.828789 | orchestrator | 2025-05-29 00:56:08 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:56:08.830424 | orchestrator | 2025-05-29 00:56:08 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:56:08.832520 | orchestrator | 2025-05-29 00:56:08 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:56:08.832551 | orchestrator | 2025-05-29 00:56:08 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:56:11.887038 | orchestrator | 2025-05-29 00:56:11 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:56:11.888394 | orchestrator | 2025-05-29 00:56:11 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:56:11.889245 | orchestrator | 2025-05-29 00:56:11 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:56:11.891000 | orchestrator | 2025-05-29 00:56:11 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:56:11.891031 | orchestrator | 2025-05-29 00:56:11 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:56:14.940979 | orchestrator | 2025-05-29 00:56:14 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:56:14.943357 | orchestrator | 2025-05-29 00:56:14 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:56:14.945079 | orchestrator | 2025-05-29 00:56:14 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:56:14.947158 | orchestrator | 2025-05-29 00:56:14 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:56:14.947246 | orchestrator | 2025-05-29 00:56:14 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:56:17.998949 | orchestrator | 2025-05-29 00:56:17 | INFO  | Task
90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:18.000498 | orchestrator | 2025-05-29 00:56:17 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:18.002441 | orchestrator | 2025-05-29 00:56:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:18.004774 | orchestrator | 2025-05-29 00:56:18 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:18.004804 | orchestrator | 2025-05-29 00:56:18 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:56:21.058585 | orchestrator | 2025-05-29 00:56:21 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:21.059331 | orchestrator | 2025-05-29 00:56:21 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:21.061554 | orchestrator | 2025-05-29 00:56:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:21.064632 | orchestrator | 2025-05-29 00:56:21 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:21.065111 | orchestrator | 2025-05-29 00:56:21 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:56:24.123252 | orchestrator | 2025-05-29 00:56:24 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:24.126638 | orchestrator | 2025-05-29 00:56:24 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:24.128767 | orchestrator | 2025-05-29 00:56:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:24.130409 | orchestrator | 2025-05-29 00:56:24 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:24.130813 | orchestrator | 2025-05-29 00:56:24 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:56:27.194444 | orchestrator | 2025-05-29 00:56:27 | INFO  | Task 
90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:27.196419 | orchestrator | 2025-05-29 00:56:27 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:27.198283 | orchestrator | 2025-05-29 00:56:27 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:27.199987 | orchestrator | 2025-05-29 00:56:27 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:27.200412 | orchestrator | 2025-05-29 00:56:27 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:56:30.260525 | orchestrator | 2025-05-29 00:56:30 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:30.263036 | orchestrator | 2025-05-29 00:56:30 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:30.266679 | orchestrator | 2025-05-29 00:56:30 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:30.266813 | orchestrator | 2025-05-29 00:56:30 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:30.266833 | orchestrator | 2025-05-29 00:56:30 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:56:33.305532 | orchestrator | 2025-05-29 00:56:33 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:33.305956 | orchestrator | 2025-05-29 00:56:33 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:33.308366 | orchestrator | 2025-05-29 00:56:33 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:33.309890 | orchestrator | 2025-05-29 00:56:33 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:33.309940 | orchestrator | 2025-05-29 00:56:33 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:56:36.365626 | orchestrator | 2025-05-29 00:56:36 | INFO  | Task 
90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:36.366688 | orchestrator | 2025-05-29 00:56:36 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:36.368203 | orchestrator | 2025-05-29 00:56:36 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:36.370251 | orchestrator | 2025-05-29 00:56:36 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:36.370277 | orchestrator | 2025-05-29 00:56:36 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:56:39.419347 | orchestrator | 2025-05-29 00:56:39 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:39.420252 | orchestrator | 2025-05-29 00:56:39 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:39.422342 | orchestrator | 2025-05-29 00:56:39 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:39.423967 | orchestrator | 2025-05-29 00:56:39 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:39.423993 | orchestrator | 2025-05-29 00:56:39 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:56:42.463737 | orchestrator | 2025-05-29 00:56:42 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:42.464428 | orchestrator | 2025-05-29 00:56:42 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:42.465777 | orchestrator | 2025-05-29 00:56:42 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:42.467179 | orchestrator | 2025-05-29 00:56:42 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:42.467211 | orchestrator | 2025-05-29 00:56:42 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:56:45.513379 | orchestrator | 2025-05-29 00:56:45 | INFO  | Task 
90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:45.514131 | orchestrator | 2025-05-29 00:56:45 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:45.521000 | orchestrator | 2025-05-29 00:56:45 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:45.522843 | orchestrator | 2025-05-29 00:56:45 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:45.522874 | orchestrator | 2025-05-29 00:56:45 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:56:48.573545 | orchestrator | 2025-05-29 00:56:48 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:48.575094 | orchestrator | 2025-05-29 00:56:48 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:48.576755 | orchestrator | 2025-05-29 00:56:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:48.579170 | orchestrator | 2025-05-29 00:56:48 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:48.579208 | orchestrator | 2025-05-29 00:56:48 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:56:51.623413 | orchestrator | 2025-05-29 00:56:51 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:51.625051 | orchestrator | 2025-05-29 00:56:51 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:51.628896 | orchestrator | 2025-05-29 00:56:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:51.630413 | orchestrator | 2025-05-29 00:56:51 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:51.630430 | orchestrator | 2025-05-29 00:56:51 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:56:54.682235 | orchestrator | 2025-05-29 00:56:54 | INFO  | Task 
90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:54.683844 | orchestrator | 2025-05-29 00:56:54 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:54.685847 | orchestrator | 2025-05-29 00:56:54 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:54.692176 | orchestrator | 2025-05-29 00:56:54 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:54.692248 | orchestrator | 2025-05-29 00:56:54 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:56:57.738875 | orchestrator | 2025-05-29 00:56:57 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:56:57.741159 | orchestrator | 2025-05-29 00:56:57 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:56:57.742520 | orchestrator | 2025-05-29 00:56:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:56:57.744192 | orchestrator | 2025-05-29 00:56:57 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:56:57.744252 | orchestrator | 2025-05-29 00:56:57 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:57:00.791559 | orchestrator | 2025-05-29 00:57:00 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:57:00.792116 | orchestrator | 2025-05-29 00:57:00 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:57:00.793041 | orchestrator | 2025-05-29 00:57:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:57:00.794081 | orchestrator | 2025-05-29 00:57:00 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:57:00.794144 | orchestrator | 2025-05-29 00:57:00 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:57:03.842346 | orchestrator | 2025-05-29 00:57:03 | INFO  | Task 
90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:57:03.843783 | orchestrator | 2025-05-29 00:57:03 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:57:03.845493 | orchestrator | 2025-05-29 00:57:03 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:57:03.847909 | orchestrator | 2025-05-29 00:57:03 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:57:03.847941 | orchestrator | 2025-05-29 00:57:03 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:57:06.904032 | orchestrator | 2025-05-29 00:57:06 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:57:06.904998 | orchestrator | 2025-05-29 00:57:06 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:57:06.906843 | orchestrator | 2025-05-29 00:57:06 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:57:06.908403 | orchestrator | 2025-05-29 00:57:06 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:57:06.908428 | orchestrator | 2025-05-29 00:57:06 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:57:09.964883 | orchestrator | 2025-05-29 00:57:09 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 00:57:09.965897 | orchestrator | 2025-05-29 00:57:09 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED 2025-05-29 00:57:09.967863 | orchestrator | 2025-05-29 00:57:09 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:57:09.969191 | orchestrator | 2025-05-29 00:57:09 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:57:09.969635 | orchestrator | 2025-05-29 00:57:09 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:57:13.013315 | orchestrator | 2025-05-29 00:57:13 | INFO  | Task 
90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:57:13.017801 | orchestrator | 2025-05-29 00:57:13 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state STARTED
2025-05-29 00:57:13.017840 | orchestrator | 2025-05-29 00:57:13 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:57:13.019068 | orchestrator | 2025-05-29 00:57:13 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:57:13.019327 | orchestrator | 2025-05-29 00:57:13 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:57:16.063920 | orchestrator | 2025-05-29 00:57:16 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:57:16.064768 | orchestrator | 2025-05-29 00:57:16 | INFO  | Task 46006056-03f4-48e1-84f6-89ef7494b5c9 is in state SUCCESS
2025-05-29 00:57:16.066633 | orchestrator |
2025-05-29 00:57:16.066701 | orchestrator |
2025-05-29 00:57:16.066715 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-29 00:57:16.066728 | orchestrator |
2025-05-29 00:57:16.066739 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-29 00:57:16.066751 | orchestrator | Thursday 29 May 2025 00:55:01 +0000 (0:00:00.311) 0:00:00.311 **********
2025-05-29 00:57:16.066762 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:57:16.066827 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:57:16.066843 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:57:16.066854 | orchestrator |
2025-05-29 00:57:16.066897 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-29 00:57:16.066909 | orchestrator | Thursday 29 May 2025 00:55:02 +0000 (0:00:00.403) 0:00:00.714 **********
2025-05-29 00:57:16.066921 | orchestrator | ok: [testbed-node-0] => (item=enable_opensearch_True)
2025-05-29 00:57:16.066933 | orchestrator | ok: [testbed-node-1] => 
(item=enable_opensearch_True)
2025-05-29 00:57:16.066944 | orchestrator | ok: [testbed-node-2] => (item=enable_opensearch_True)
2025-05-29 00:57:16.066955 | orchestrator |
2025-05-29 00:57:16.066966 | orchestrator | PLAY [Apply role opensearch] ***************************************************
2025-05-29 00:57:16.066977 | orchestrator |
2025-05-29 00:57:16.066988 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-29 00:57:16.067000 | orchestrator | Thursday 29 May 2025 00:55:02 +0000 (0:00:00.287) 0:00:01.002 **********
2025-05-29 00:57:16.067041 | orchestrator | included: /ansible/roles/opensearch/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:57:16.067053 | orchestrator |
2025-05-29 00:57:16.067064 | orchestrator | TASK [opensearch : Setting sysctl values] **************************************
2025-05-29 00:57:16.067075 | orchestrator | Thursday 29 May 2025 00:55:03 +0000 (0:00:00.712) 0:00:01.715 **********
2025-05-29 00:57:16.067086 | orchestrator | changed: [testbed-node-0] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-29 00:57:16.067097 | orchestrator | changed: [testbed-node-1] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-29 00:57:16.067108 | orchestrator | changed: [testbed-node-2] => (item={'name': 'vm.max_map_count', 'value': 262144})
2025-05-29 00:57:16.067119 | orchestrator |
2025-05-29 00:57:16.067130 | orchestrator | TASK [opensearch : Ensuring config directories exist] **************************
2025-05-29 00:57:16.067140 | orchestrator | Thursday 29 May 2025 00:55:04 +0000 (0:00:00.807) 0:00:02.522 **********
2025-05-29 00:57:16.067169 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g 
-Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:57:16.067186 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:57:16.067222 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:57:16.067238 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:57:16.067289 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:57:16.067316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:57:16.067342 | orchestrator | 2025-05-29 00:57:16.067356 | orchestrator | TASK [opensearch : include_tasks] ********************************************** 2025-05-29 00:57:16.067370 | orchestrator | Thursday 29 May 2025 00:55:05 +0000 (0:00:01.556) 0:00:04.078 ********** 2025-05-29 
00:57:16.067382 | orchestrator | included: /ansible/roles/opensearch/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:57:16.067394 | orchestrator | 2025-05-29 00:57:16.067405 | orchestrator | TASK [service-cert-copy : opensearch | Copying over extra CA certificates] ***** 2025-05-29 00:57:16.067415 | orchestrator | Thursday 29 May 2025 00:55:06 +0000 (0:00:00.781) 0:00:04.860 ********** 2025-05-29 00:57:16.067436 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:57:16.067449 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:57:16.067466 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:57:16.067478 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': 
{'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:57:16.067505 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:57:16.067519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:57:16.067531 | orchestrator | 2025-05-29 00:57:16.067542 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS certificate] *** 2025-05-29 00:57:16.067553 | orchestrator | Thursday 29 May 2025 00:55:09 +0000 (0:00:03.366) 0:00:08.226 ********** 2025-05-29 00:57:16.067570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-29 00:57:16.067582 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': 
['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-29 00:57:16.067601 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:57:16.067619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-29 00:57:16.067632 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-29 00:57:16.067644 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:57:16.067660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-29 00:57:16.067672 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': 
{'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-29 00:57:16.067691 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:57:16.067702 | orchestrator | 2025-05-29 00:57:16.067713 | orchestrator | TASK [service-cert-copy : opensearch | Copying over backend internal TLS key] *** 2025-05-29 00:57:16.067724 | orchestrator | Thursday 29 May 2025 00:55:10 +0000 (0:00:01.039) 0:00:09.266 ********** 2025-05-29 00:57:16.067742 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': 
{'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-29 00:57:16.067755 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-29 00:57:16.067767 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:57:16.067784 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-29 00:57:16.067803 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-29 00:57:16.067814 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:57:16.067831 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})  2025-05-29 00:57:16.067844 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})  2025-05-29 00:57:16.067855 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:57:16.067866 | orchestrator | 2025-05-29 00:57:16.067878 | orchestrator | TASK [opensearch : Copying over config.json files for services] **************** 2025-05-29 00:57:16.067889 | orchestrator | Thursday 29 May 2025 00:55:11 +0000 (0:00:01.058) 0:00:10.324 ********** 2025-05-29 00:57:16.067905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': 
{'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:57:16.067930 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:57:16.067942 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:57:16.067960 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:57:16.067978 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:57:16.067997 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:57:16.068009 | orchestrator | 2025-05-29 00:57:16.068020 | orchestrator | TASK [opensearch : Copying over opensearch service config file] **************** 2025-05-29 00:57:16.068031 | orchestrator | Thursday 29 May 2025 00:55:14 +0000 (0:00:02.440) 
0:00:12.764 **********
2025-05-29 00:57:16.068043 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:57:16.068054 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:57:16.068065 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:57:16.068076 | orchestrator |
2025-05-29 00:57:16.068087 | orchestrator | TASK [opensearch : Copying over opensearch-dashboards config file] *************
2025-05-29 00:57:16.068097 | orchestrator | Thursday 29 May 2025 00:55:17 +0000 (0:00:03.373) 0:00:16.137 **********
2025-05-29 00:57:16.068108 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:57:16.068119 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:57:16.068129 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:57:16.068140 | orchestrator |
2025-05-29 00:57:16.068151 | orchestrator | TASK [opensearch : Check opensearch containers] ********************************
2025-05-29 00:57:16.068162 | orchestrator | Thursday 29 May 2025 00:55:19 +0000 (0:00:01.655) 0:00:17.792 **********
2025-05-29 00:57:16.068450 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}})
2025-05-29 00:57:16.068473 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch',
'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:57:16.068501 | orchestrator | changed: [testbed-node-2] => (item={'key': 'opensearch', 'value': {'container_name': 'opensearch', 'group': 'opensearch', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/opensearch:2.18.0.20241206', 'environment': {'OPENSEARCH_JAVA_OPTS': '-Xms1g -Xmx1g -Dlog4j2.formatMsgNoLookups=true'}, 'volumes': ['/etc/kolla/opensearch/:/var/lib/kolla/config_files/', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'opensearch:/var/lib/opensearch/data', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9200'], 'timeout': '30'}, 'haproxy': {'opensearch': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9200', 'frontend_http_extra': ['option dontlog-normal']}}}}) 2025-05-29 00:57:16.068514 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 
'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:57:16.068534 | orchestrator | changed: [testbed-node-1] => (item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}}) 2025-05-29 00:57:16.068547 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'opensearch-dashboards', 'value': {'container_name': 'opensearch_dashboards', 'group': 'opensearch-dashboards', 'enabled': True, 'environment': {'OPENSEARCH_DASHBOARDS_SECURITY_PLUGIN': 'False'}, 'image': 'registry.osism.tech/kolla/release/opensearch-dashboards:2.18.0.20241206', 'volumes': ['/etc/kolla/opensearch-dashboards/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5601'], 'timeout': '30'}, 'haproxy': {'opensearch-dashboards': {'enabled': True, 'mode': 'http', 'external': False, 'port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}, 'opensearch_dashboards_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '5601', 'listen_port': '5601', 'auth_user': 'opensearch', 'auth_pass': 'password'}}}})
2025-05-29 00:57:16.068566 | orchestrator |
2025-05-29 00:57:16.068578 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-29 00:57:16.068589 | orchestrator | Thursday 29 May 2025 00:55:22 +0000 (0:00:02.632) 0:00:20.425 **********
2025-05-29 00:57:16.068600 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:57:16.068611 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:57:16.068622 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:57:16.068633 | orchestrator |
2025-05-29 00:57:16.068644 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-29 00:57:16.068659 | orchestrator | Thursday 29 May 2025 00:55:22 +0000 (0:00:00.299) 0:00:20.724 **********
2025-05-29 00:57:16.068671 | orchestrator |
2025-05-29 00:57:16.068682 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-29 00:57:16.068693 | orchestrator | Thursday 29 May 2025 00:55:22 +0000 (0:00:00.400) 0:00:21.124 **********
2025-05-29 00:57:16.068703 | orchestrator |
2025-05-29 00:57:16.068714 | orchestrator | TASK [opensearch : Flush handlers] *********************************************
2025-05-29 00:57:16.068725 | orchestrator | Thursday 29 May 2025 00:55:22 +0000 (0:00:00.122) 0:00:21.247 **********
2025-05-29 00:57:16.068736 | orchestrator |
2025-05-29 00:57:16.068747 | orchestrator | RUNNING HANDLER [opensearch : Disable shard allocation] ************************
2025-05-29 00:57:16.068758 | orchestrator | Thursday 29 May 2025 00:55:23 +0000 (0:00:00.130) 0:00:21.377 **********
2025-05-29 00:57:16.068769 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:57:16.068779 | orchestrator |
2025-05-29 00:57:16.068814 | orchestrator | RUNNING HANDLER [opensearch : Perform a flush] *********************************
2025-05-29 00:57:16.068846 | orchestrator | Thursday 29 May 2025 00:55:23 +0000 (0:00:00.295) 0:00:21.673 **********
2025-05-29 00:57:16.068858 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:57:16.068869 | orchestrator |
2025-05-29 00:57:16.068880 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch container] ********************
2025-05-29 00:57:16.068890 | orchestrator | Thursday 29 May 2025 00:55:23 +0000 (0:00:00.516) 0:00:22.190 **********
2025-05-29 00:57:16.068901 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:57:16.068912 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:57:16.068923 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:57:16.068933 | orchestrator |
2025-05-29 00:57:16.068944 | orchestrator | RUNNING HANDLER [opensearch : Restart opensearch-dashboards container] *********
2025-05-29 00:57:16.068955 | orchestrator | Thursday 29 May 2025 00:55:59 +0000 (0:00:35.366) 0:00:57.556 **********
2025-05-29 00:57:16.068966 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:57:16.068977 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:57:16.068987 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:57:16.068998 | orchestrator |
2025-05-29 00:57:16.069009 | orchestrator | TASK [opensearch : include_tasks] **********************************************
2025-05-29 00:57:16.069020 | orchestrator | Thursday 29 May 2025 00:57:00 +0000 (0:01:01.783) 0:01:59.340 **********
2025-05-29 00:57:16.069031 | orchestrator | included: /ansible/roles/opensearch/tasks/post-config.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 00:57:16.069042 | orchestrator |
2025-05-29 00:57:16.069053 | orchestrator | TASK [opensearch : Wait for OpenSearch to become ready] ************************
2025-05-29 00:57:16.069066 | orchestrator | Thursday 29 May 2025 00:57:01 +0000 (0:00:00.700) 0:02:00.040 **********
2025-05-29 00:57:16.069079 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:57:16.069092 | orchestrator |
2025-05-29 00:57:16.069105 | orchestrator | TASK [opensearch : Check if a log retention policy exists] *********************
2025-05-29 00:57:16.069118 | orchestrator | Thursday 29 May 2025 00:57:04 +0000 (0:00:02.691) 0:02:02.732 **********
2025-05-29 00:57:16.069130 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:57:16.069143 | orchestrator |
2025-05-29 00:57:16.069154 | orchestrator | TASK [opensearch : Create new log retention policy] ****************************
2025-05-29 00:57:16.069165 | orchestrator | Thursday 29 May 2025 00:57:06 +0000 (0:00:02.597) 0:02:05.330 **********
2025-05-29 00:57:16.069183 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:57:16.069194 | orchestrator |
2025-05-29 00:57:16.069205 | orchestrator | TASK [opensearch : Apply retention policy to existing indices] *****************
2025-05-29 00:57:16.069216 | orchestrator | Thursday 29 May 2025 00:57:10 +0000 (0:00:03.072) 0:02:08.403 **********
2025-05-29 00:57:16.069227 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:57:16.069238 | orchestrator |
2025-05-29 00:57:16.069255 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:57:16.069327 | orchestrator | testbed-node-0 : ok=18  changed=11  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-29 00:57:16.069341 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-29 00:57:16.069352 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=3  rescued=0 ignored=0 2025-05-29 00:57:16.069363 | orchestrator | 2025-05-29 00:57:16.069374 | orchestrator | 2025-05-29 00:57:16.069385 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 00:57:16.069396 | orchestrator | Thursday 29 May 2025 00:57:12 +0000 (0:00:02.892) 0:02:11.296 ********** 2025-05-29 00:57:16.069407 | orchestrator | =============================================================================== 2025-05-29 00:57:16.069416 | orchestrator | opensearch : Restart opensearch-dashboards container ------------------- 61.78s 2025-05-29 00:57:16.069426 | orchestrator | opensearch : Restart opensearch container ------------------------------ 35.37s 2025-05-29 00:57:16.069436 | orchestrator | opensearch : Copying over opensearch service config file ---------------- 3.37s 2025-05-29 00:57:16.069445 | orchestrator | service-cert-copy : opensearch | Copying over extra CA certificates ----- 3.37s 2025-05-29 00:57:16.069455 | orchestrator | opensearch : Create new log retention policy ---------------------------- 3.07s 2025-05-29 00:57:16.069464 | orchestrator | opensearch : Apply retention policy to existing indices ----------------- 2.89s 2025-05-29 00:57:16.069474 | orchestrator | opensearch : Wait for OpenSearch to become ready ------------------------ 2.69s 2025-05-29 00:57:16.069483 | orchestrator | opensearch : Check opensearch containers -------------------------------- 2.63s 2025-05-29 00:57:16.069493 | 
orchestrator | opensearch : Check if a log retention policy exists --------------------- 2.60s 2025-05-29 00:57:16.069502 | orchestrator | opensearch : Copying over config.json files for services ---------------- 2.44s 2025-05-29 00:57:16.069511 | orchestrator | opensearch : Copying over opensearch-dashboards config file ------------- 1.66s 2025-05-29 00:57:16.069526 | orchestrator | opensearch : Ensuring config directories exist -------------------------- 1.56s 2025-05-29 00:57:16.069536 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS key --- 1.06s 2025-05-29 00:57:16.069546 | orchestrator | service-cert-copy : opensearch | Copying over backend internal TLS certificate --- 1.04s 2025-05-29 00:57:16.069555 | orchestrator | opensearch : Setting sysctl values -------------------------------------- 0.81s 2025-05-29 00:57:16.069565 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.78s 2025-05-29 00:57:16.069574 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.71s 2025-05-29 00:57:16.069584 | orchestrator | opensearch : include_tasks ---------------------------------------------- 0.70s 2025-05-29 00:57:16.069593 | orchestrator | opensearch : Flush handlers --------------------------------------------- 0.65s 2025-05-29 00:57:16.069603 | orchestrator | opensearch : Perform a flush -------------------------------------------- 0.52s 2025-05-29 00:57:16.069613 | orchestrator | 2025-05-29 00:57:16 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:57:16.069623 | orchestrator | 2025-05-29 00:57:16 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:57:16.069632 | orchestrator | 2025-05-29 00:57:16 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:57:19.110846 | orchestrator | 2025-05-29 00:57:19 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED 2025-05-29 
00:57:19.116002 | orchestrator | 2025-05-29 00:57:19 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:57:19.116061 | orchestrator | 2025-05-29 00:57:19 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:57:19.116074 | orchestrator | 2025-05-29 00:57:19 | INFO  | Wait 1 second(s) until the next check
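The wait loop recorded in this log (poll each task id, report its state, sleep one second, repeat until nothing is left in STARTED) can be sketched roughly as follows. This is a minimal sketch, not the actual OSISM client: `get_state` is a hypothetical callable standing in for whatever API reports task state.

```python
import time

def wait_for_tasks(task_ids, get_state, interval=1.0, timeout=3600.0):
    """Poll until no task remains in state STARTED, as the log above does.

    get_state(task_id) -> state string ("STARTED", "SUCCESS", "FAILURE", ...).
    It is a hypothetical stand-in, not part of a real OSISM client API.
    """
    deadline = time.monotonic() + timeout
    pending = list(task_ids)
    while pending:
        still_running = []
        for task_id in pending:
            state = get_state(task_id)
            # Mirrors the log format: "Task <id> is in state <STATE>"
            print(f"Task {task_id} is in state {state}")
            if state == "STARTED":
                still_running.append(task_id)
        pending = still_running
        if pending:
            if time.monotonic() > deadline:
                raise TimeoutError(f"tasks still pending: {pending}")
            print(f"Wait {interval:g} second(s) until the next check")
            time.sleep(interval)
```

Note the fixed one-second sleep: the ~3 s gap between checks in the log is the sleep plus the round-trip time of the state queries themselves.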
2025-05-29 00:58:17.098708 | orchestrator | 2025-05-29 00:58:17 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:58:17.099824 | orchestrator | 2025-05-29 00:58:17 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:58:17.102058 | orchestrator | 2025-05-29 00:58:17 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:58:17.102092 | orchestrator | 2025-05-29 00:58:17 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:58:20.149178 | orchestrator | 2025-05-29 00:58:20 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state STARTED
2025-05-29 00:58:20.151126 | orchestrator | 2025-05-29 00:58:20 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 00:58:20.153530 | orchestrator | 2025-05-29 00:58:20 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED
2025-05-29 00:58:20.153561 | orchestrator | 2025-05-29 00:58:20 | INFO  | Wait 1 second(s) until the next check
2025-05-29 00:58:23.201195 | orchestrator | 2025-05-29 00:58:23 | INFO  | Task 90449d2c-f0d9-4c77-bf28-592ce81763fa is in state SUCCESS
orchestrator |
orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
orchestrator |
orchestrator | PLAY [Prepare deployment of Ceph services] *************************************
orchestrator |
orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
orchestrator | Thursday 29 May 2025 00:45:16 +0000 (0:00:01.575) 0:00:01.575 **********
orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
orchestrator | Thursday 29 May 2025 00:45:18 +0000 (0:00:01.164) 0:00:02.739 **********
orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
orchestrator | changed: [testbed-node-0] => (item=testbed-node-1)
orchestrator | changed: [testbed-node-0] => (item=testbed-node-2)
orchestrator |
orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
orchestrator | Thursday 29 May 2025 00:45:18 +0000 (0:00:00.630) 0:00:03.370 **********
orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
orchestrator | Thursday 29 May 2025 00:45:20 +0000 (0:00:01.393) 0:00:04.764 **********
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
orchestrator | Thursday 29 May 2025 00:45:21 +0000 (0:00:01.393) 0:00:06.158 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
orchestrator | Thursday 29 May 2025 00:45:22 +0000 (0:00:01.019) 0:00:07.177 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
orchestrator | Thursday 29 May 2025 00:45:23 +0000 (0:00:01.163) 0:00:08.340 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
orchestrator | Thursday 29 May 2025 00:45:24 +0000 (0:00:00.963) 0:00:09.304 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
orchestrator | Thursday 29 May 2025 00:45:25 +0000 (0:00:00.632) 0:00:09.936 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
orchestrator | Thursday 29 May 2025 00:45:26 +0000 (0:00:00.935) 0:00:10.872 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
orchestrator | Thursday 29 May 2025 00:45:26 +0000 (0:00:00.723) 0:00:11.596 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
orchestrator | Thursday 29 May 2025 00:45:28 +0000 (0:00:01.091) 0:00:12.687 **********
orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
orchestrator |
orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
orchestrator | Thursday 29 May 2025 00:45:28 +0000 (0:00:00.644) 0:00:13.332 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : find a running mon container] *******************************
orchestrator | Thursday 29 May 2025 00:45:30 +0000 (0:00:01.325) 0:00:14.657 **********
orchestrator | changed: [testbed-node-0] => (item=testbed-node-0)
orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
orchestrator |
orchestrator | TASK [ceph-facts : check for a ceph mon socket] ********************************
orchestrator | Thursday 29 May 2025 00:45:32 +0000 (0:00:02.835) 0:00:17.492 **********
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] *********************
orchestrator | Thursday 29 May 2025 00:45:33 +0000 (0:00:00.422) 0:00:17.914 **********
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] ***********************
orchestrator | Thursday 29 May 2025 00:45:34 +0000 (0:00:00.837) 0:00:18.752 **********
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact running_mon - container] ***************************
orchestrator | Thursday 29 May 2025 00:45:34 +0000 (0:00:00.314) 0:00:19.066 **********
orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-29 00:45:30.806975', 'end': '2025-05-29 00:45:31.074839', 'delta': '0:00:00.267864', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})
orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-29 00:45:31.642868', 'end': '2025-05-29 00:45:31.920442', 'delta': '0:00:00.277574', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})
orchestrator | skipping: [testbed-node-0] => (item={'changed': True, 'stdout': '', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-29 00:45:32.453758', 'end': '2025-05-29 00:45:32.719065', 'delta': '0:00:00.265307', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': [], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] *******************************
orchestrator | Thursday 29 May 2025 00:45:34 +0000 (0:00:00.322) 0:00:19.389 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] *************
orchestrator | Thursday 29 May 2025 00:45:36 +0000 (0:00:01.860) 0:00:21.249 **********
orchestrator | ok: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] *********************************
orchestrator | Thursday 29 May 2025 00:45:37 +0000 (0:00:00.627) 0:00:21.876 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : get current fsid] *******************************************
orchestrator | Thursday 29 May 2025 00:45:37 +0000 (0:00:00.593) 0:00:22.470 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
orchestrator | Thursday 29 May 2025 00:45:39 +0000 (0:00:01.179) 0:00:23.650 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] ****************************
orchestrator | Thursday 29 May 2025 00:45:39 +0000 (0:00:00.729) 0:00:24.379 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : generate cluster fsid] **************************************
orchestrator | Thursday 29 May 2025 00:45:39 +0000 (0:00:00.153) 0:00:24.532 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact fsid] **********************************************
orchestrator | Thursday 29 May 2025 00:45:40 +0000 (0:00:00.732) 0:00:25.265 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : resolve device link(s)] *************************************
orchestrator | Thursday 29 May 2025 00:45:41 +0000 (0:00:00.812) 0:00:26.078 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] **************
orchestrator | Thursday 29 May 2025 00:45:42 +0000 (0:00:01.037) 0:00:27.115 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] ***************************
orchestrator | Thursday 29 May 2025 00:45:43 +0000 (0:00:00.735) 0:00:27.851 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] ****
orchestrator | Thursday 29 May 2025 00:45:44 +0000 (0:00:00.953) 0:00:28.805 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] ***********************
orchestrator | Thursday 29 May 2025 00:45:44 +0000 (0:00:00.638) 0:00:29.443 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] ***
orchestrator | Thursday 29 May 2025 00:45:45 +0000 (0:00:00.885) 0:00:30.328 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] ***
orchestrator | Thursday 29 May 2025 00:45:46 +0000 (0:00:00.697) 0:00:31.026 **********
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links':
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207282 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207294 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d', 'scsi-SQEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part1', 'scsi-SQEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part14', 'scsi-SQEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part15', 'scsi-SQEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part16', 'scsi-SQEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 00:58:23.207335 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-29-00-02-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 00:58:23.207356 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207384 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207395 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207407 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207418 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.207430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207440 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 
'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207477 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcea969f-ad3c-4191-bf1d-aa670bfd6fcb', 'scsi-SQEMU_QEMU_HARDDISK_dcea969f-ad3c-4191-bf1d-aa670bfd6fcb'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcea969f-ad3c-4191-bf1d-aa670bfd6fcb-part1', 'scsi-SQEMU_QEMU_HARDDISK_dcea969f-ad3c-4191-bf1d-aa670bfd6fcb-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcea969f-ad3c-4191-bf1d-aa670bfd6fcb-part14', 'scsi-SQEMU_QEMU_HARDDISK_dcea969f-ad3c-4191-bf1d-aa670bfd6fcb-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcea969f-ad3c-4191-bf1d-aa670bfd6fcb-part15', 'scsi-SQEMU_QEMU_HARDDISK_dcea969f-ad3c-4191-bf1d-aa670bfd6fcb-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_dcea969f-ad3c-4191-bf1d-aa670bfd6fcb-part16', 'scsi-SQEMU_QEMU_HARDDISK_dcea969f-ad3c-4191-bf1d-aa670bfd6fcb-part16'], 'labels': 
['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 00:58:23.207495 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-29-00-02-02-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 00:58:23.207506 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207516 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207533 | orchestrator 
| skipping: [testbed-node-2] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207551 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207632 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207653 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207717 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.207735 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': 
{'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43db7570-342a-4138-8a11-552755fecf02', 'scsi-SQEMU_QEMU_HARDDISK_43db7570-342a-4138-8a11-552755fecf02'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43db7570-342a-4138-8a11-552755fecf02-part1', 'scsi-SQEMU_QEMU_HARDDISK_43db7570-342a-4138-8a11-552755fecf02-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43db7570-342a-4138-8a11-552755fecf02-part14', 'scsi-SQEMU_QEMU_HARDDISK_43db7570-342a-4138-8a11-552755fecf02-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': 
['scsi-0QEMU_QEMU_HARDDISK_43db7570-342a-4138-8a11-552755fecf02-part15', 'scsi-SQEMU_QEMU_HARDDISK_43db7570-342a-4138-8a11-552755fecf02-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_43db7570-342a-4138-8a11-552755fecf02-part16', 'scsi-SQEMU_QEMU_HARDDISK_43db7570-342a-4138-8a11-552755fecf02-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 00:58:23.207785 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-29-00-02-07-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 00:58:23.207796 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b02a0e5a--ac94--54a1--88a1--38ba26e145f6-osd--block--b02a0e5a--ac94--54a1--88a1--38ba26e145f6', 'dm-uuid-LVM-kFkfR2mg2uG0RdKoScCCsYXIzL1wUaDrnsW8OabwjzP4k0MKWfHuFtPoPX27hc2A'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': 
'1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207813 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--81bd5020--0460--5411--80bb--35101e63cce8-osd--block--81bd5020--0460--5411--80bb--35101e63cce8', 'dm-uuid-LVM-ku1tZkcyLWSOzUUaMbnfRMIwJ6AfKUfx75lphOGtk86nN57fRqJNVrWLW44XINSF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207823 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 
'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207854 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207863 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.207880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207897 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207907 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 
'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 00:58:23.207932 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155', 'scsi-SQEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 
'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 00:58:23.207949 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b02a0e5a--ac94--54a1--88a1--38ba26e145f6-osd--block--b02a0e5a--ac94--54a1--88a1--38ba26e145f6'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VE8tYJ-XxAf-GuWx-HiSz-BoTR-ozoD-esy6Od', 'scsi-0QEMU_QEMU_HARDDISK_172ad3b6-4b22-4cdf-a28e-ac5da2182fda', 'scsi-SQEMU_QEMU_HARDDISK_172ad3b6-4b22-4cdf-a28e-ac5da2182fda'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 00:58:23.207967 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--2961dba5--5d3e--5262--aab3--a8717ef28b96-osd--block--2961dba5--5d3e--5262--aab3--a8717ef28b96', 'dm-uuid-LVM-90PxGUbVkc7IBnExigfQK6mIHuEE3fo2YCFDhVHi9nN3OmL4GdmB6KJkvKJRQ3Er'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 
'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.207982 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--81bd5020--0460--5411--80bb--35101e63cce8-osd--block--81bd5020--0460--5411--80bb--35101e63cce8'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0sGu9Q-DbTo-vXNO-bcWg-Jvqq-5Jh2-vInOeY', 'scsi-0QEMU_QEMU_HARDDISK_81c2fe1f-38cc-49f7-ae7d-3d898626253d', 'scsi-SQEMU_QEMU_HARDDISK_81c2fe1f-38cc-49f7-ae7d-3d898626253d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-29 00:58:23.207992 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--10c8172d--d6a1--5b27--956e--8c5bc818fcb1-osd--block--10c8172d--d6a1--5b27--956e--8c5bc818fcb1', 'dm-uuid-LVM-u69ClKhH4KooD2hXW2P2vYFyL8r4YNPHuZb8s4q5kdBdnPfjgAVJ1FqcD3h80XjK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208003 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208014 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_872f8c6a-38b8-4598-af69-d174e2488207', 'scsi-SQEMU_QEMU_HARDDISK_872f8c6a-38b8-4598-af69-d174e2488207'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-29 00:58:23.208025 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-29-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-29 00:58:23.208057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208078 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208101 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208111 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208140 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61', 'scsi-SQEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-29 00:58:23.208159 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2961dba5--5d3e--5262--aab3--a8717ef28b96-osd--block--2961dba5--5d3e--5262--aab3--a8717ef28b96'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nJs0rF-5fyc-psQj-NDDF-8xau-LUmY-o2A3mw', 'scsi-0QEMU_QEMU_HARDDISK_d4d6d7dc-ffab-40f4-8a14-6defed4afc9f', 'scsi-SQEMU_QEMU_HARDDISK_d4d6d7dc-ffab-40f4-8a14-6defed4afc9f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-29 00:58:23.208174 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--10c8172d--d6a1--5b27--956e--8c5bc818fcb1-osd--block--10c8172d--d6a1--5b27--956e--8c5bc818fcb1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eSK637-oDdC-qeoY-pFAl-3PfZ-Ha94-Q8K5VH', 'scsi-0QEMU_QEMU_HARDDISK_ab52b3eb-0fd7-41fe-9d4d-bdc516081274', 'scsi-SQEMU_QEMU_HARDDISK_ab52b3eb-0fd7-41fe-9d4d-bdc516081274'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-29 00:58:23.208185 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7dbb189-5858-4eca-9499-fceb9ae8f8d2', 'scsi-SQEMU_QEMU_HARDDISK_f7dbb189-5858-4eca-9499-fceb9ae8f8d2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-29 00:58:23.208195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-29-00-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-29 00:58:23.208230 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.208241 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.208258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a1850b6b--a1b4--57b7--9f5e--deb9029890df-osd--block--a1850b6b--a1b4--57b7--9f5e--deb9029890df', 'dm-uuid-LVM-ffTB4yYFOzqyp9l6VNjtab7UyxHeVzSNRk5JG42cW1fCAxN71z7Fj9Ahix4r12LQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--05ae814f--03ae--5777--aef4--91f0b0270e90-osd--block--05ae814f--03ae--5777--aef4--91f0b0270e90', 'dm-uuid-LVM-tkriDHC3ygW15DdnKwLVvbc47iLkDPk4WFWX9fhAHuNww6Kf6WAHtjisspirk1mO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208289 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208337 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})
2025-05-29 00:58:23.208386 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428', 'scsi-SQEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part1', 'scsi-SQEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part14', 'scsi-SQEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part15', 'scsi-SQEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part16', 'scsi-SQEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-29 00:58:23.208398 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a1850b6b--a1b4--57b7--9f5e--deb9029890df-osd--block--a1850b6b--a1b4--57b7--9f5e--deb9029890df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-a3AfDo-SWFj-jdr1-Im7o-k563-sVpW-78YTEC', 'scsi-0QEMU_QEMU_HARDDISK_baffed07-1ba6-4c69-bef3-fae49f76e29e', 'scsi-SQEMU_QEMU_HARDDISK_baffed07-1ba6-4c69-bef3-fae49f76e29e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-29 00:58:23.208417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--05ae814f--03ae--5777--aef4--91f0b0270e90-osd--block--05ae814f--03ae--5777--aef4--91f0b0270e90'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ugLbjG-hbub-KRlg-RZhO-5WRL-ezxt-RTsC3p', 'scsi-0QEMU_QEMU_HARDDISK_6be5e360-5fe4-4176-98be-0e33dc067da2', 'scsi-SQEMU_QEMU_HARDDISK_6be5e360-5fe4-4176-98be-0e33dc067da2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-29 00:58:23.208433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c045ec7e-dfd2-45aa-a5da-e7ebbe64f976', 'scsi-SQEMU_QEMU_HARDDISK_c045ec7e-dfd2-45aa-a5da-e7ebbe64f976'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-29 00:58:23.208444 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-29-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})
2025-05-29 00:58:23.208454 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.208463 | orchestrator |
2025-05-29 00:58:23.208473 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************
2025-05-29 00:58:23.208483 | orchestrator | Thursday 29 May 2025 00:45:48 +0000 (0:00:01.725) 0:00:32.751 **********
2025-05-29 00:58:23.208493 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.208502 | orchestrator |
2025-05-29 00:58:23.208512 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] *******************************
2025-05-29 00:58:23.208522 | orchestrator | Thursday 29 May 2025 00:45:48 +0000 (0:00:00.258) 0:00:33.010 **********
2025-05-29 00:58:23.208531 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.208541 | orchestrator |
2025-05-29 00:58:23.208703 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] **************************************
2025-05-29 00:58:23.208717 | orchestrator | Thursday 29 May 2025 00:45:48 +0000 (0:00:00.170) 0:00:33.180 **********
2025-05-29 00:58:23.208726 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.208736 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.208745 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.208755 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.208764 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.208774 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.208789 | orchestrator |
2025-05-29 00:58:23.208798 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ******************************
2025-05-29 00:58:23.208808 | orchestrator | Thursday 29 May 2025 00:45:49 +0000 (0:00:00.875) 0:00:34.055 **********
2025-05-29 00:58:23.208817 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:58:23.208827 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:58:23.208837 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:58:23.208846 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.208855 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.208865 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.208874 | orchestrator |
2025-05-29 00:58:23.208884 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] ***************
2025-05-29 00:58:23.208901 | orchestrator | Thursday 29 May 2025 00:45:51 +0000 (0:00:01.600) 0:00:35.655 **********
2025-05-29 00:58:23.208910 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:58:23.208920 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:58:23.208930 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:58:23.208939 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.208948 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.208958 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.208967 | orchestrator |
2025-05-29 00:58:23.208977 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-05-29 00:58:23.208986 | orchestrator | Thursday 29 May 2025 00:45:51 +0000 (0:00:00.871) 0:00:36.527 **********
2025-05-29 00:58:23.208996 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.209005 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.209015 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.209024 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.209050 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.209060 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.209070 | orchestrator |
2025-05-29 00:58:23.209079 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-29 00:58:23.209089 | orchestrator | Thursday 29 May 2025 00:45:52 +0000 (0:00:01.072) 0:00:37.600 **********
2025-05-29 00:58:23.209098 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.209108 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.209117 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.209127 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.209136 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.209146 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.209156 | orchestrator |
2025-05-29 00:58:23.209165 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] ***************************
2025-05-29 00:58:23.209175 | orchestrator | Thursday 29 May 2025 00:45:53 +0000 (0:00:00.791) 0:00:38.391 **********
2025-05-29 00:58:23.209184 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.209194 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.209242 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.209253 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.209262 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.209272 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.209282 | orchestrator |
2025-05-29 00:58:23.209291 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] ***********************
2025-05-29 00:58:23.209301 | orchestrator | Thursday 29 May 2025 00:45:54 +0000 (0:00:01.082) 0:00:39.474 **********
2025-05-29 00:58:23.209311 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.209320 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.209329 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.209339 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.209348 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.209357 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.209405 | orchestrator |
2025-05-29 00:58:23.209416 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] ***
2025-05-29 00:58:23.209425 | orchestrator | Thursday 29 May 2025 00:45:55 +0000 (0:00:01.009) 0:00:40.484 **********
2025-05-29 00:58:23.209435 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-29 00:58:23.209451 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-29 00:58:23.209462 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-29 00:58:23.209478 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-29 00:58:23.209495 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.209512 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-29 00:58:23.209529 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-29 00:58:23.209546 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-29 00:58:23.209572 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-29 00:58:23.209582 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-29 00:58:23.209591 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-29 00:58:23.209601 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-29 00:58:23.209611 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.209620 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.209630 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-29 00:58:23.209640 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-29 00:58:23.209649 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-29 00:58:23.209659 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.209668 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-29 00:58:23.209678 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.209688 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-29 00:58:23.209697 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-29 00:58:23.209707 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-29 00:58:23.209716 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.209726 | orchestrator |
2025-05-29 00:58:23.209882 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] ***
2025-05-29 00:58:23.209892 | orchestrator | Thursday 29 May 2025 00:45:59 +0000 (0:00:03.351) 0:00:43.835 **********
2025-05-29 00:58:23.209902 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-29 00:58:23.209919 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-29 00:58:23.209928 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-29 00:58:23.209938 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-29 00:58:23.209947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-29 00:58:23.209957 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-29 00:58:23.209966 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-29 00:58:23.209976 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.209985 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-29 00:58:23.209995 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-29 00:58:23.210004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-29 00:58:23.210014 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.210052 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-29 00:58:23.210062 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.210071 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-29 00:58:23.210081 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-29 00:58:23.210090 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-29 00:58:23.210100 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.210109 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-29 00:58:23.210119 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-29 00:58:23.210129 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-29 00:58:23.210138 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.210148 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-29 00:58:23.210157 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.210167 | orchestrator |
2025-05-29 00:58:23.210181 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-05-29 00:58:23.210199 | orchestrator | Thursday 29 May 2025 00:46:01 +0000 (0:00:01.983) 0:00:45.818 **********
2025-05-29 00:58:23.210239 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-29 00:58:23.210258 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-0)
2025-05-29 00:58:23.210296 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-0)
2025-05-29 00:58:23.210312 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-29 00:58:23.210331 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0)
2025-05-29 00:58:23.210347 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1)
2025-05-29 00:58:23.210364 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-1)
2025-05-29 00:58:23.210381 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-29 00:58:23.210391 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-1)
2025-05-29 00:58:23.210401 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0)
2025-05-29 00:58:23.210410 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0)
2025-05-29 00:58:23.210420 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2)
2025-05-29 00:58:23.210429 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1)
2025-05-29 00:58:23.210439 | orchestrator | ok: [testbed-node-2] => (item=testbed-node-2)
2025-05-29 00:58:23.210448 | orchestrator | ok: [testbed-node-1] => (item=testbed-node-2)
2025-05-29 00:58:23.210458 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1)
2025-05-29 00:58:23.210514 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2)
2025-05-29 00:58:23.210525 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2)
2025-05-29 00:58:23.210534 | orchestrator |
2025-05-29 00:58:23.210544 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-05-29 00:58:23.210564 | orchestrator | Thursday 29 May 2025 00:46:06 +0000 (0:00:05.123) 0:00:50.941 **********
2025-05-29 00:58:23.210574 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-29 00:58:23.210584 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-29 00:58:23.210593 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-29 00:58:23.210603 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.210612 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-29 00:58:23.210621 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-29 00:58:23.210631 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-29 00:58:23.210640 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.210649 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-29 00:58:23.210659 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-29 00:58:23.210668 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-29 00:58:23.210677 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-29 00:58:23.210686 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-29 00:58:23.210696 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-29 00:58:23.210705 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.210714 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-29 00:58:23.210724 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-29 00:58:23.210734 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)
2025-05-29 00:58:23.210743 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.210752 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.210762 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)
2025-05-29 00:58:23.210771 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)
2025-05-29 00:58:23.210780 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)
2025-05-29 00:58:23.210790 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.210799 | orchestrator |
2025-05-29 00:58:23.210808 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-05-29 00:58:23.210825 | orchestrator | Thursday 29 May 2025 00:46:07 +0000 (0:00:01.361) 0:00:52.303 **********
2025-05-29 00:58:23.210835 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-29 00:58:23.210852 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-29 00:58:23.210862 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-29 00:58:23.210871 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.210881 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-0)
2025-05-29 00:58:23.210890 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-1)
2025-05-29 00:58:23.210900 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-2)
2025-05-29 00:58:23.210909 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-0)
2025-05-29 00:58:23.210918 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-1)
2025-05-29 00:58:23.210928 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.210937 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-2)
2025-05-29 00:58:23.210946 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)
2025-05-29 00:58:23.210955 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)
2025-05-29 00:58:23.210965 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.210974 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)
2025-05-29 00:58:23.210983 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)
2025-05-29 00:58:23.210993 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)
2025-05-29 00:58:23.211002 |
orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-29 00:58:23.211011 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.211021 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.211030 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-29 00:58:23.211040 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-29 00:58:23.211049 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-29 00:58:23.211058 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.211068 | orchestrator | 2025-05-29 00:58:23.211077 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-29 00:58:23.211087 | orchestrator | Thursday 29 May 2025 00:46:08 +0000 (0:00:01.142) 0:00:53.445 ********** 2025-05-29 00:58:23.211096 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'}) 2025-05-29 00:58:23.211106 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-29 00:58:23.211116 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-29 00:58:23.211126 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-29 00:58:23.211135 | orchestrator | ok: [testbed-node-1] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'}) 2025-05-29 00:58:23.211145 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-29 00:58:23.211154 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-29 00:58:23.211164 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-29 00:58:23.211173 | orchestrator | ok: [testbed-node-2] => (item={'name': 
'testbed-node-2', 'addr': '192.168.16.12'}) 2025-05-29 00:58:23.211188 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-29 00:58:23.211198 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-29 00:58:23.211207 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-29 00:58:23.211240 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.211251 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-29 00:58:23.211260 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-29 00:58:23.211276 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-29 00:58:23.211286 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.211296 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-29 00:58:23.211305 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-29 00:58:23.211315 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-29 00:58:23.211324 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.211334 | orchestrator | 2025-05-29 00:58:23.211343 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-29 00:58:23.211353 | orchestrator | Thursday 29 May 2025 00:46:10 +0000 (0:00:01.726) 0:00:55.172 ********** 2025-05-29 00:58:23.211362 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.211372 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.211382 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.211391 | 
orchestrator | included: /ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 00:58:23.211401 | orchestrator | 2025-05-29 00:58:23.211411 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-29 00:58:23.211421 | orchestrator | Thursday 29 May 2025 00:46:12 +0000 (0:00:01.832) 0:00:57.005 ********** 2025-05-29 00:58:23.211431 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.211440 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.211450 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.211459 | orchestrator | 2025-05-29 00:58:23.211469 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-29 00:58:23.211479 | orchestrator | Thursday 29 May 2025 00:46:13 +0000 (0:00:00.653) 0:00:57.659 ********** 2025-05-29 00:58:23.211488 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.211498 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.211507 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.211517 | orchestrator | 2025-05-29 00:58:23.211527 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-29 00:58:23.211544 | orchestrator | Thursday 29 May 2025 00:46:13 +0000 (0:00:00.853) 0:00:58.512 ********** 2025-05-29 00:58:23.211562 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.211589 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.211606 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.211623 | orchestrator | 2025-05-29 00:58:23.211639 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-29 00:58:23.211656 | orchestrator | Thursday 29 May 2025 00:46:14 +0000 (0:00:00.653) 0:00:59.166 ********** 2025-05-29 00:58:23.211675 | 
orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.211692 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.211709 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.211720 | orchestrator | 2025-05-29 00:58:23.211729 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-29 00:58:23.211739 | orchestrator | Thursday 29 May 2025 00:46:15 +0000 (0:00:00.881) 0:01:00.047 ********** 2025-05-29 00:58:23.211749 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.211758 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.211768 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.211777 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.211787 | orchestrator | 2025-05-29 00:58:23.211797 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-29 00:58:23.211806 | orchestrator | Thursday 29 May 2025 00:46:16 +0000 (0:00:00.679) 0:01:00.727 ********** 2025-05-29 00:58:23.211816 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.211826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.211845 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.211855 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.211864 | orchestrator | 2025-05-29 00:58:23.211874 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-29 00:58:23.211884 | orchestrator | Thursday 29 May 2025 00:46:16 +0000 (0:00:00.829) 0:01:01.557 ********** 2025-05-29 00:58:23.211893 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.211903 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.211913 | orchestrator | skipping: 
[testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.211923 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.211932 | orchestrator | 2025-05-29 00:58:23.211942 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-29 00:58:23.211952 | orchestrator | Thursday 29 May 2025 00:46:17 +0000 (0:00:00.919) 0:01:02.476 ********** 2025-05-29 00:58:23.211962 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.211971 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.211981 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.211991 | orchestrator | 2025-05-29 00:58:23.212000 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-29 00:58:23.212018 | orchestrator | Thursday 29 May 2025 00:46:18 +0000 (0:00:00.453) 0:01:02.929 ********** 2025-05-29 00:58:23.212028 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-29 00:58:23.212038 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-29 00:58:23.212048 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-29 00:58:23.212057 | orchestrator | 2025-05-29 00:58:23.212067 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-29 00:58:23.212076 | orchestrator | Thursday 29 May 2025 00:46:19 +0000 (0:00:01.065) 0:01:03.995 ********** 2025-05-29 00:58:23.212086 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.212095 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.212105 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.212114 | orchestrator | 2025-05-29 00:58:23.212124 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-29 00:58:23.212133 | orchestrator | Thursday 29 May 2025 00:46:20 +0000 (0:00:00.728) 0:01:04.723 ********** 2025-05-29 00:58:23.212143 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.212152 | 
orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.212162 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.212171 | orchestrator | 2025-05-29 00:58:23.212181 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-29 00:58:23.212239 | orchestrator | Thursday 29 May 2025 00:46:20 +0000 (0:00:00.636) 0:01:05.360 ********** 2025-05-29 00:58:23.212251 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-29 00:58:23.212261 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-29 00:58:23.212270 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.212280 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.212289 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-29 00:58:23.212299 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.212308 | orchestrator | 2025-05-29 00:58:23.212318 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-29 00:58:23.212327 | orchestrator | Thursday 29 May 2025 00:46:21 +0000 (0:00:00.817) 0:01:06.177 ********** 2025-05-29 00:58:23.212337 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.212347 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.212362 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.212372 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.212381 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.212402 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.212411 | orchestrator | 2025-05-29 00:58:23.212421 | orchestrator | TASK [ceph-facts : 
set_fact rgw_instances_all] ********************************* 2025-05-29 00:58:23.212431 | orchestrator | Thursday 29 May 2025 00:46:22 +0000 (0:00:00.761) 0:01:06.939 ********** 2025-05-29 00:58:23.212440 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.212450 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-29 00:58:23.212459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.212469 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-29 00:58:23.212478 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.212487 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.212497 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-29 00:58:23.212506 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-29 00:58:23.212516 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.212525 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-29 00:58:23.212535 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-29 00:58:23.212544 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.212554 | orchestrator | 2025-05-29 00:58:23.212564 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-29 00:58:23.212573 | orchestrator | Thursday 29 May 2025 00:46:23 +0000 (0:00:01.222) 0:01:08.162 ********** 2025-05-29 00:58:23.212583 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.212592 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.212602 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.212611 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.212620 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.212630 | orchestrator | skipping: [testbed-node-5] 2025-05-29 
00:58:23.212639 | orchestrator | 2025-05-29 00:58:23.212649 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-29 00:58:23.212658 | orchestrator | Thursday 29 May 2025 00:46:24 +0000 (0:00:00.941) 0:01:09.103 ********** 2025-05-29 00:58:23.212668 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-29 00:58:23.212677 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-29 00:58:23.212687 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-29 00:58:23.212697 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-29 00:58:23.212706 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-29 00:58:23.212716 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-29 00:58:23.212725 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-29 00:58:23.212735 | orchestrator | 2025-05-29 00:58:23.212744 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-29 00:58:23.212754 | orchestrator | Thursday 29 May 2025 00:46:25 +0000 (0:00:01.055) 0:01:10.159 ********** 2025-05-29 00:58:23.212764 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-29 00:58:23.212779 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-29 00:58:23.212789 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-29 00:58:23.212798 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3) 2025-05-29 00:58:23.212808 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-29 00:58:23.212817 
| orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-29 00:58:23.212833 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-29 00:58:23.212843 | orchestrator | 2025-05-29 00:58:23.212852 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-29 00:58:23.212862 | orchestrator | Thursday 29 May 2025 00:46:27 +0000 (0:00:01.765) 0:01:11.925 ********** 2025-05-29 00:58:23.212872 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 00:58:23.212882 | orchestrator | 2025-05-29 00:58:23.212892 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-29 00:58:23.212902 | orchestrator | Thursday 29 May 2025 00:46:28 +0000 (0:00:01.111) 0:01:13.036 ********** 2025-05-29 00:58:23.212911 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.212921 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.212930 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.212940 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.212949 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.212959 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.212968 | orchestrator | 2025-05-29 00:58:23.212978 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-29 00:58:23.212988 | orchestrator | Thursday 29 May 2025 00:46:29 +0000 (0:00:01.128) 0:01:14.164 ********** 2025-05-29 00:58:23.212997 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.213007 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.213016 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.213030 | orchestrator | ok: [testbed-node-3] 2025-05-29 
00:58:23.213040 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.213049 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.213059 | orchestrator | 2025-05-29 00:58:23.213069 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-29 00:58:23.213078 | orchestrator | Thursday 29 May 2025 00:46:31 +0000 (0:00:01.609) 0:01:15.774 ********** 2025-05-29 00:58:23.213088 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.213097 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.213107 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.213116 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.213126 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.213135 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.213144 | orchestrator | 2025-05-29 00:58:23.213154 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-29 00:58:23.213164 | orchestrator | Thursday 29 May 2025 00:46:32 +0000 (0:00:01.167) 0:01:16.942 ********** 2025-05-29 00:58:23.213173 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.213182 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.213192 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.213201 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.213273 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.213286 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.213296 | orchestrator | 2025-05-29 00:58:23.213306 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-29 00:58:23.213315 | orchestrator | Thursday 29 May 2025 00:46:33 +0000 (0:00:01.152) 0:01:18.094 ********** 2025-05-29 00:58:23.213325 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.213334 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.213344 | orchestrator | ok: [testbed-node-1] 
skipping: [testbed-node-4]
ok: [testbed-node-2]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a rbd mirror container] *************************
Thursday 29 May 2025 00:46:34 +0000 (0:00:01.476) 0:01:19.571 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a nfs container] ********************************
Thursday 29 May 2025 00:46:35 +0000 (0:00:00.849) 0:01:20.421 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a tcmu-runner container] ************************
Thursday 29 May 2025 00:46:36 +0000 (0:00:00.898) 0:01:21.320 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a rbd-target-api container] *********************
Thursday 29 May 2025 00:46:37 +0000 (0:00:01.101) 0:01:22.421 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a rbd-target-gw container] **********************
Thursday 29 May 2025 00:46:38 +0000 (0:00:01.017) 0:01:23.439 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a ceph-crash container] *************************
Thursday 29 May 2025 00:46:39 +0000 (0:00:00.630) 0:01:24.070 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : include check_socket_non_container.yml] *******************
Thursday 29 May 2025 00:46:41 +0000 (0:00:01.983) 0:01:26.053 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_mon_status] ******************************
Thursday 29 May 2025 00:46:42 +0000 (0:00:00.656) 0:01:26.710 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_osd_status] ******************************
Thursday 29 May 2025 00:46:42 +0000 (0:00:00.801) 0:01:27.511 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : set_fact handler_mds_status] ******************************
Thursday 29 May 2025 00:46:43 +0000 (0:00:00.769) 0:01:28.280 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : set_fact handler_rgw_status] ******************************
Thursday 29 May 2025 00:46:44 +0000 (0:00:00.868) 0:01:29.149 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : set_fact handler_nfs_status] ******************************
Thursday 29 May 2025 00:46:45 +0000 (0:00:00.743) 0:01:29.892 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_rbd_status] ******************************
Thursday 29 May 2025 00:46:46 +0000 (0:00:00.847) 0:01:30.740 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_mgr_status] ******************************
Thursday 29 May 2025 00:46:46 +0000 (0:00:00.599) 0:01:31.339 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : set_fact handler_crash_status] ****************************
Thursday 29 May 2025 00:46:47 +0000 (0:00:00.847) 0:01:32.186 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
Thursday 29 May 2025 00:46:48 +0000 (0:00:00.924) 0:01:33.111 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
Thursday 29 May 2025 00:46:49 +0000 (0:00:01.067) 0:01:34.179 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : reset num_osds] ********************************************
Thursday 29 May 2025 00:46:50 +0000 (0:00:00.832) 0:01:35.011 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : count number of osds for lvm scenario] *********************
Thursday 29 May 2025 00:46:51 +0000 (0:00:00.842) 0:01:35.854 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : look up for ceph-volume rejected devices] ******************
Thursday 29 May 2025 00:46:51 +0000 (0:00:00.626) 0:01:36.481 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact rejected_devices] *********************************
Thursday 29 May 2025 00:46:52 +0000 (0:00:00.892) 0:01:37.373 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : set_fact _devices] *****************************************
Thursday 29 May 2025 00:46:53 +0000 (0:00:00.596) 0:01:37.969 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Thursday 29 May 2025 00:46:54
+0000 (0:00:00.844) 0:01:38.814 ********** 2025-05-29 00:58:23.215204 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.215223 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.215230 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.215237 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.215243 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.215250 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.215256 | orchestrator | 2025-05-29 00:58:23.215263 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-29 00:58:23.215270 | orchestrator | Thursday 29 May 2025 00:46:54 +0000 (0:00:00.614) 0:01:39.428 ********** 2025-05-29 00:58:23.215276 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.215283 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.215290 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.215296 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.215303 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.215309 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.215316 | orchestrator | 2025-05-29 00:58:23.215322 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-29 00:58:23.215329 | orchestrator | Thursday 29 May 2025 00:46:55 +0000 (0:00:00.866) 0:01:40.294 ********** 2025-05-29 00:58:23.215336 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.215343 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.215360 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.215367 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.215374 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.215380 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.215387 | orchestrator | 
2025-05-29 00:58:23.215393 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-29 00:58:23.215400 | orchestrator | Thursday 29 May 2025 00:46:56 +0000 (0:00:00.644) 0:01:40.939 ********** 2025-05-29 00:58:23.215407 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.215413 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.215420 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.215426 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.215438 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.215444 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.215451 | orchestrator | 2025-05-29 00:58:23.215458 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-29 00:58:23.215464 | orchestrator | Thursday 29 May 2025 00:46:57 +0000 (0:00:01.157) 0:01:42.096 ********** 2025-05-29 00:58:23.215471 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.215478 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.215484 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.215490 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.215497 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.215504 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.215510 | orchestrator | 2025-05-29 00:58:23.215517 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-29 00:58:23.215523 | orchestrator | Thursday 29 May 2025 00:46:58 +0000 (0:00:00.778) 0:01:42.875 ********** 2025-05-29 00:58:23.215533 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-29 00:58:23.215548 | orchestrator | skipping: [testbed-node-0] => (item=)  2025-05-29 00:58:23.215565 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.215576 | orchestrator | skipping: [testbed-node-1] 
=> (item=)  2025-05-29 00:58:23.215587 | orchestrator | skipping: [testbed-node-1] => (item=)  2025-05-29 00:58:23.215599 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.215610 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-29 00:58:23.215622 | orchestrator | skipping: [testbed-node-2] => (item=)  2025-05-29 00:58:23.215630 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.215642 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-29 00:58:23.215648 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-29 00:58:23.215655 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.215662 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-29 00:58:23.215668 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-29 00:58:23.215675 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.215682 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-29 00:58:23.215688 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-29 00:58:23.215695 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.215702 | orchestrator | 2025-05-29 00:58:23.215708 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-29 00:58:23.215715 | orchestrator | Thursday 29 May 2025 00:46:59 +0000 (0:00:01.427) 0:01:44.302 ********** 2025-05-29 00:58:23.215722 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-29 00:58:23.215728 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-29 00:58:23.215735 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.215742 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-29 00:58:23.215748 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-29 00:58:23.215755 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.215761 | orchestrator | skipping: 
[testbed-node-2] => (item=osd memory target)  2025-05-29 00:58:23.215768 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-29 00:58:23.215775 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.215781 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-29 00:58:23.215788 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-29 00:58:23.215794 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.215801 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-29 00:58:23.215808 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-29 00:58:23.215814 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.215821 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-29 00:58:23.215827 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-29 00:58:23.215840 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.215847 | orchestrator | 2025-05-29 00:58:23.215854 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-29 00:58:23.215860 | orchestrator | Thursday 29 May 2025 00:47:00 +0000 (0:00:00.994) 0:01:45.297 ********** 2025-05-29 00:58:23.215867 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.215874 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.215880 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.215887 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.215893 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.215900 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.215907 | orchestrator | 2025-05-29 00:58:23.215913 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-29 00:58:23.215920 | orchestrator | Thursday 29 May 2025 00:47:01 +0000 
(0:00:00.851) 0:01:46.148 ********** 2025-05-29 00:58:23.215927 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.215933 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.215940 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.215946 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.215953 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.215959 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.215966 | orchestrator | 2025-05-29 00:58:23.215973 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-29 00:58:23.215986 | orchestrator | Thursday 29 May 2025 00:47:02 +0000 (0:00:00.625) 0:01:46.774 ********** 2025-05-29 00:58:23.215993 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.215999 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.216006 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.216012 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.216019 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.216026 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.216032 | orchestrator | 2025-05-29 00:58:23.216039 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-29 00:58:23.216046 | orchestrator | Thursday 29 May 2025 00:47:03 +0000 (0:00:00.894) 0:01:47.668 ********** 2025-05-29 00:58:23.216052 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.216059 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.216066 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.216072 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.216079 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.216085 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.216092 | orchestrator | 2025-05-29 
00:58:23.216098 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-29 00:58:23.216105 | orchestrator | Thursday 29 May 2025 00:47:03 +0000 (0:00:00.751) 0:01:48.420 ********** 2025-05-29 00:58:23.216112 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.216118 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.216125 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.216131 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.216138 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.216145 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.216151 | orchestrator | 2025-05-29 00:58:23.216158 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-29 00:58:23.216165 | orchestrator | Thursday 29 May 2025 00:47:04 +0000 (0:00:00.993) 0:01:49.413 ********** 2025-05-29 00:58:23.216171 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.216178 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.216184 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.216191 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.216198 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.216204 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.216238 | orchestrator | 2025-05-29 00:58:23.216252 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-29 00:58:23.216269 | orchestrator | Thursday 29 May 2025 00:47:05 +0000 (0:00:00.723) 0:01:50.137 ********** 2025-05-29 00:58:23.216280 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-29 00:58:23.216290 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-29 00:58:23.216297 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-29 00:58:23.216304 | orchestrator | skipping: 
[testbed-node-0] 2025-05-29 00:58:23.216311 | orchestrator | 2025-05-29 00:58:23.216317 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-29 00:58:23.216324 | orchestrator | Thursday 29 May 2025 00:47:06 +0000 (0:00:00.822) 0:01:50.960 ********** 2025-05-29 00:58:23.216331 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-29 00:58:23.216337 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-29 00:58:23.216344 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-29 00:58:23.216350 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.216357 | orchestrator | 2025-05-29 00:58:23.216364 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-29 00:58:23.216370 | orchestrator | Thursday 29 May 2025 00:47:06 +0000 (0:00:00.429) 0:01:51.389 ********** 2025-05-29 00:58:23.216377 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-29 00:58:23.216384 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-29 00:58:23.216390 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-29 00:58:23.216397 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.216403 | orchestrator | 2025-05-29 00:58:23.216410 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-29 00:58:23.216417 | orchestrator | Thursday 29 May 2025 00:47:07 +0000 (0:00:00.429) 0:01:51.819 ********** 2025-05-29 00:58:23.216423 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.216430 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.216437 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.216443 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.216450 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.216456 | orchestrator | 
skipping: [testbed-node-5] 2025-05-29 00:58:23.216463 | orchestrator | 2025-05-29 00:58:23.216469 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-29 00:58:23.216476 | orchestrator | Thursday 29 May 2025 00:47:07 +0000 (0:00:00.615) 0:01:52.434 ********** 2025-05-29 00:58:23.216483 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-29 00:58:23.216489 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.216496 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-29 00:58:23.216502 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.216509 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-29 00:58:23.216516 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.216522 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-29 00:58:23.216529 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.216535 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-29 00:58:23.216542 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.216548 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-29 00:58:23.216555 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.216562 | orchestrator | 2025-05-29 00:58:23.216568 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-29 00:58:23.216575 | orchestrator | Thursday 29 May 2025 00:47:08 +0000 (0:00:01.170) 0:01:53.604 ********** 2025-05-29 00:58:23.216582 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.216588 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.216595 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.216601 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.216613 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.216620 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.216627 | orchestrator | 
2025-05-29 00:58:23.216638 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-29 00:58:23.216645 | orchestrator | Thursday 29 May 2025 00:47:09 +0000 (0:00:00.598) 0:01:54.203 ********** 2025-05-29 00:58:23.216652 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.216658 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.216665 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.216672 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.216678 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.216684 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.216691 | orchestrator | 2025-05-29 00:58:23.216697 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-29 00:58:23.216704 | orchestrator | Thursday 29 May 2025 00:47:10 +0000 (0:00:00.861) 0:01:55.064 ********** 2025-05-29 00:58:23.216711 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-29 00:58:23.216717 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.216726 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-29 00:58:23.216741 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.216755 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-29 00:58:23.216766 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.216776 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-29 00:58:23.216787 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.216797 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-29 00:58:23.216807 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.216816 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-29 00:58:23.216826 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.216837 | orchestrator | 2025-05-29 00:58:23.216848 | orchestrator | TASK [ceph-facts : set_fact 
rgw_instances_host] ******************************** 2025-05-29 00:58:23.216860 | orchestrator | Thursday 29 May 2025 00:47:11 +0000 (0:00:00.908) 0:01:55.972 ********** 2025-05-29 00:58:23.216872 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.216883 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.216889 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.216896 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.216903 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.216918 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.216925 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.216932 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.216939 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.216945 | orchestrator | 2025-05-29 00:58:23.216952 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-29 00:58:23.216959 | orchestrator | Thursday 29 May 2025 00:47:12 +0000 (0:00:01.095) 0:01:57.068 ********** 2025-05-29 00:58:23.216965 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-29 00:58:23.216972 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-29 00:58:23.216979 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-29 00:58:23.216985 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.216992 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)  2025-05-29 00:58:23.216999 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-29 00:58:23.217005 | 
orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-29 00:58:23.217012 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.217019 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-29 00:58:23.217032 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-29 00:58:23.217039 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-29 00:58:23.217046 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.217052 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.217059 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.217066 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.217072 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.217079 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-29 00:58:23.217085 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-29 00:58:23.217092 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-29 00:58:23.217099 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.217105 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-29 00:58:23.217112 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-29 00:58:23.217119 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-29 00:58:23.217125 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.217132 | orchestrator | 2025-05-29 00:58:23.217139 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-29 00:58:23.217145 | orchestrator | Thursday 29 May 2025 00:47:14 +0000 (0:00:01.771) 0:01:58.840 ********** 2025-05-29 00:58:23.217152 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.217159 | orchestrator | skipping: 
[testbed-node-1] 2025-05-29 00:58:23.217166 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.217172 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.217179 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.217186 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.217192 | orchestrator | 2025-05-29 00:58:23.217199 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ****************************************** 2025-05-29 00:58:23.217206 | orchestrator | Thursday 29 May 2025 00:47:15 +0000 (0:00:01.456) 0:02:00.296 ********** 2025-05-29 00:58:23.217253 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.217261 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.217274 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.217281 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-29 00:58:23.217288 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.217295 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-29 00:58:23.217302 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.217309 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-29 00:58:23.217315 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.217322 | orchestrator | 2025-05-29 00:58:23.217329 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-29 00:58:23.217336 | orchestrator | Thursday 29 May 2025 00:47:17 +0000 (0:00:01.363) 0:02:01.660 ********** 2025-05-29 00:58:23.217342 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.217349 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.217355 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.217361 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.217368 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.217374 | orchestrator | skipping: [testbed-node-5] 2025-05-29 
00:58:23.217380 | orchestrator | 2025-05-29 00:58:23.217387 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-29 00:58:23.217393 | orchestrator | Thursday 29 May 2025 00:47:18 +0000 (0:00:01.560) 0:02:03.221 ********** 2025-05-29 00:58:23.217399 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.217406 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.217412 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.217418 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.217424 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.217437 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.217444 | orchestrator | 2025-05-29 00:58:23.217450 | orchestrator | TASK [ceph-container-common : generate systemd ceph-mon target file] *********** 2025-05-29 00:58:23.217457 | orchestrator | Thursday 29 May 2025 00:47:19 +0000 (0:00:01.385) 0:02:04.606 ********** 2025-05-29 00:58:23.217463 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.217469 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.217476 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.217482 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:58:23.217488 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.217494 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:58:23.217501 | orchestrator | 2025-05-29 00:58:23.217507 | orchestrator | TASK [ceph-container-common : enable ceph.target] ****************************** 2025-05-29 00:58:23.217517 | orchestrator | Thursday 29 May 2025 00:47:21 +0000 (0:00:01.610) 0:02:06.216 ********** 2025-05-29 00:58:23.217524 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.217530 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.217536 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.217543 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.217549 
| orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : include prerequisites.yml] ***********************
orchestrator | Thursday 29 May 2025 00:47:24 +0000 (0:00:02.793) 0:02:09.010 **********
orchestrator | included: /ansible/roles/ceph-container-common/tasks/prerequisites.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-container-common : stop lvmetad] ************************************
orchestrator | Thursday 29 May 2025 00:47:25 +0000 (0:00:01.591) 0:02:10.601 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : disable and mask lvmetad service] ****************
orchestrator | Thursday 29 May 2025 00:47:26 +0000 (0:00:00.712) 0:02:11.314 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : remove ceph udev rules] **************************
orchestrator | Thursday 29 May 2025 00:47:27 +0000 (0:00:00.875) 0:02:12.190 **********
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-0] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-2] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/95-ceph-osd.rules)
orchestrator | ok: [testbed-node-1] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-3] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-4] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator | ok: [testbed-node-5] => (item=/usr/lib/udev/rules.d/60-ceph-by-parttypeuuid.rules)
orchestrator |
orchestrator | TASK [ceph-container-common : ensure tmpfiles.d is present] ********************
orchestrator | Thursday 29 May 2025 00:47:28 +0000 (0:00:01.314) 0:02:13.504 **********
orchestrator | changed: [testbed-node-0]
orchestrator | changed: [testbed-node-1]
orchestrator | changed: [testbed-node-2]
orchestrator | changed: [testbed-node-3]
orchestrator | changed: [testbed-node-4]
orchestrator | changed: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : restore certificates selinux context] ************
orchestrator | Thursday 29 May 2025 00:47:30 +0000 (0:00:01.281) 0:02:14.785 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : include registry.yml] ****************************
orchestrator | Thursday 29 May 2025 00:47:30 +0000 (0:00:00.617) 0:02:15.403 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : include fetch_image.yml] *************************
orchestrator | Thursday 29 May 2025 00:47:31 +0000 (0:00:00.833) 0:02:16.237 **********
orchestrator | included: /ansible/roles/ceph-container-common/tasks/fetch_image.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image] ***
orchestrator | Thursday 29 May 2025 00:47:32 +0000 (0:00:01.203) 0:02:17.441 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-2]
orchestrator |
orchestrator | TASK [ceph-container-common : pulling alertmanager/prometheus/grafana container images] ***
orchestrator | Thursday 29 May 2025 00:48:18 +0000 (0:00:45.306) 0:03:02.747 **********
orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-0] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-0] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-1] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-1] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-2] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-2] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-3] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-3] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-4] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-4] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/alertmanager:v0.16.2)
orchestrator | skipping: [testbed-node-5] => (item=docker.io/prom/prometheus:v2.7.2)
orchestrator | skipping: [testbed-node-5] => (item=docker.io/grafana/grafana:6.7.4)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : pulling node-exporter container image] ***********
orchestrator | Thursday 29 May 2025 00:48:18 +0000 (0:00:00.812) 0:03:03.559 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : export local ceph dev image] *********************
orchestrator | Thursday 29 May 2025 00:48:19 +0000 (0:00:00.575) 0:03:04.135 **********
orchestrator | skipping: [testbed-node-0]
orchestrator |
orchestrator | TASK [ceph-container-common : copy ceph dev image file] ************************
orchestrator | Thursday 29 May 2025 00:48:19 +0000 (0:00:00.156) 0:03:04.291 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : load ceph dev image] *****************************
orchestrator | Thursday 29 May 2025 00:48:20 +0000 (0:00:00.952) 0:03:05.244 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : remove tmp ceph dev image file] ******************
orchestrator | Thursday 29 May 2025 00:48:21 +0000 (0:00:00.577) 0:03:05.822 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : get ceph version] ********************************
orchestrator | Thursday 29 May 2025 00:48:21 +0000 (0:00:00.775) 0:03:06.597 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : set_fact ceph_version ceph_version.stdout.split] ***
orchestrator | Thursday 29 May 2025 00:48:23 +0000 (0:00:01.534) 0:03:08.132 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : include release.yml] *****************************
orchestrator | Thursday 29 May 2025 00:48:24 +0000 (0:00:00.956) 0:03:09.088 **********
orchestrator | included: /ansible/roles/ceph-container-common/tasks/release.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-container-common : set_fact ceph_release jewel] *********************
orchestrator | Thursday 29 May 2025 00:48:25 +0000 (0:00:01.395) 0:03:10.484 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : set_fact ceph_release kraken] ********************
orchestrator | Thursday 29 May 2025 00:48:26 +0000 (0:00:00.730) 0:03:11.214 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : set_fact ceph_release luminous] ******************
orchestrator | Thursday 29 May 2025 00:48:27 +0000 (0:00:00.971) 0:03:12.186 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : set_fact ceph_release mimic] *********************
orchestrator | Thursday 29 May 2025 00:48:28 +0000 (0:00:00.787) 0:03:12.974 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : set_fact ceph_release nautilus] ******************
orchestrator | Thursday 29 May 2025 00:48:29 +0000 (0:00:01.055) 0:03:14.029 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : set_fact ceph_release octopus] *******************
orchestrator | Thursday 29 May 2025 00:48:30 +0000 (0:00:00.640) 0:03:14.669 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : set_fact ceph_release pacific] *******************
orchestrator | Thursday 29 May 2025 00:48:31 +0000 (0:00:01.069) 0:03:15.739 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-container-common : set_fact ceph_release quincy] ********************
orchestrator | Thursday 29 May 2025 00:48:31 +0000 (0:00:00.656) 0:03:16.395 **********
orchestrator | ok: [testbed-node-0]
orchestrator | ok: [testbed-node-1]
orchestrator | ok: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
orchestrator | Thursday 29 May 2025 00:48:33 +0000 (0:00:01.324) 0:03:17.720 **********
orchestrator | included: /ansible/roles/ceph-config/tasks/create_ceph_initial_dirs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-config : create ceph initial directories] ***************************
orchestrator | Thursday 29 May 2025 00:48:34 +0000 (0:00:01.343) 0:03:19.063 **********
orchestrator | changed: [testbed-node-0] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-1] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-2] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-3] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-4] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-5] => (item=/etc/ceph)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mon)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/osd)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/tmp)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/radosgw)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rgw)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mgr)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd)
orchestrator | changed: [testbed-node-1] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | changed: [testbed-node-1] => (item=/var/run/ceph)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd)
orchestrator | changed: [testbed-node-2] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | changed: [testbed-node-0] => (item=/var/run/ceph)
orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | changed: [testbed-node-1] => (item=/var/log/ceph)
orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-rbd-mirror)
orchestrator | changed: [testbed-node-2] => (item=/var/run/ceph)
orchestrator | changed: [testbed-node-0] => (item=/var/log/ceph)
orchestrator | changed: [testbed-node-4] => (item=/var/run/ceph)
orchestrator | changed: [testbed-node-3] => (item=/var/run/ceph)
orchestrator | changed: [testbed-node-5] => (item=/var/run/ceph)
orchestrator | changed: [testbed-node-2] => (item=/var/log/ceph)
orchestrator | changed: [testbed-node-4] => (item=/var/log/ceph)
orchestrator | changed: [testbed-node-3] => (item=/var/log/ceph)
orchestrator | changed: [testbed-node-5] => (item=/var/log/ceph)
orchestrator |
orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
orchestrator | Thursday 29 May 2025 00:48:40 +0000 (0:00:06.082) 0:03:25.146 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | included: /ansible/roles/ceph-config/tasks/rgw_systemd_environment_file.yml for testbed-node-3, testbed-node-4, testbed-node-5
orchestrator |
orchestrator | TASK [ceph-config : create rados gateway instance directories] *****************
orchestrator | Thursday 29 May 2025 00:48:41 +0000 (0:00:01.263) 0:03:26.409 **********
orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
orchestrator |
orchestrator | TASK [ceph-config : generate environment file] *********************************
orchestrator | Thursday 29 May 2025 00:48:42 +0000 (0:00:01.175) 0:03:27.584 **********
orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
orchestrator |
orchestrator | TASK [ceph-config : reset num_osds] ********************************************
orchestrator | Thursday 29 May 2025 00:48:44 +0000 (0:00:01.234) 0:03:28.818 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
orchestrator | Thursday 29 May 2025 00:48:45 +0000 (0:00:00.957) 0:03:29.776 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
orchestrator | Thursday 29 May 2025 00:48:45 +0000 (0:00:00.719) 0:03:30.496 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
orchestrator | Thursday 29 May 2025 00:48:46 +0000 (0:00:00.849) 0:03:31.345 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
orchestrator | Thursday 29 May 2025 00:48:47 +0000 (0:00:00.608) 0:03:31.954 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
orchestrator | Thursday 29 May 2025 00:48:48 +0000 (0:00:00.854) 0:03:32.809 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
orchestrator | Thursday 29 May 2025 00:48:48 +0000 (0:00:00.619) 0:03:33.429 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
orchestrator | Thursday 29 May 2025 00:48:49 +0000 (0:00:00.940) 0:03:34.369 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
orchestrator | Thursday 29 May 2025 00:48:50 +0000 (0:00:00.902) 0:03:35.271 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
orchestrator | Thursday 29 May 2025 00:48:53 +0000 (0:00:02.400) 0:03:37.671 **********
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2]
orchestrator | ok: [testbed-node-3]
orchestrator | ok: [testbed-node-4]
orchestrator | ok: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
orchestrator | Thursday 29 May 2025 00:48:53 +0000 (0:00:00.706) 0:03:38.378 **********
orchestrator | skipping: [testbed-node-0] => (item=)
orchestrator | skipping: [testbed-node-0] => (item=)
orchestrator | skipping: [testbed-node-0]
orchestrator | skipping: [testbed-node-1] => (item=)
orchestrator | skipping: [testbed-node-1] => (item=)
orchestrator | skipping: [testbed-node-1]
orchestrator | skipping: [testbed-node-2] => (item=)
orchestrator | skipping: [testbed-node-2] => (item=)
orchestrator | skipping: [testbed-node-2]
orchestrator | skipping: [testbed-node-3] => (item=)
orchestrator | skipping: [testbed-node-3] => (item=)
orchestrator | skipping: [testbed-node-3]
orchestrator | skipping: [testbed-node-4] => (item=)
orchestrator | skipping: [testbed-node-4] => (item=)
orchestrator | skipping: [testbed-node-4]
orchestrator | skipping: [testbed-node-5] => (item=)
orchestrator | skipping: [testbed-node-5] => (item=)
orchestrator | skipping: [testbed-node-5]
orchestrator |
orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-29
00:58:23.221515 | orchestrator | Thursday 29 May 2025 00:48:54 +0000 (0:00:01.031) 0:03:39.409 ********** 2025-05-29 00:58:23.221522 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)  2025-05-29 00:58:23.221528 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)  2025-05-29 00:58:23.221536 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.221549 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)  2025-05-29 00:58:23.221564 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)  2025-05-29 00:58:23.221574 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.221584 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)  2025-05-29 00:58:23.221594 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)  2025-05-29 00:58:23.221612 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.221624 | orchestrator | ok: [testbed-node-3] => (item=osd memory target) 2025-05-29 00:58:23.221631 | orchestrator | ok: [testbed-node-3] => (item=osd_memory_target) 2025-05-29 00:58:23.221638 | orchestrator | ok: [testbed-node-4] => (item=osd memory target) 2025-05-29 00:58:23.221644 | orchestrator | ok: [testbed-node-4] => (item=osd_memory_target) 2025-05-29 00:58:23.221650 | orchestrator | ok: [testbed-node-5] => (item=osd memory target) 2025-05-29 00:58:23.221656 | orchestrator | ok: [testbed-node-5] => (item=osd_memory_target) 2025-05-29 00:58:23.221662 | orchestrator | 2025-05-29 00:58:23.221669 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-29 00:58:23.221675 | orchestrator | Thursday 29 May 2025 00:48:55 +0000 (0:00:00.830) 0:03:40.239 ********** 2025-05-29 00:58:23.221681 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.221687 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.221693 | orchestrator | skipping: [testbed-node-2] 2025-05-29 
00:58:23.221700 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.221706 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.221712 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.221718 | orchestrator | 2025-05-29 00:58:23.221725 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-29 00:58:23.221731 | orchestrator | Thursday 29 May 2025 00:48:56 +0000 (0:00:01.022) 0:03:41.262 ********** 2025-05-29 00:58:23.221737 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.221749 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.221756 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.221762 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.221768 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.221774 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.221780 | orchestrator | 2025-05-29 00:58:23.221786 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-29 00:58:23.221793 | orchestrator | Thursday 29 May 2025 00:48:57 +0000 (0:00:00.626) 0:03:41.888 ********** 2025-05-29 00:58:23.221799 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.221804 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.221809 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.221815 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.221820 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.221825 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.221830 | orchestrator | 2025-05-29 00:58:23.221836 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-29 00:58:23.221841 | orchestrator | Thursday 29 May 2025 00:48:58 +0000 (0:00:00.899) 0:03:42.787 ********** 2025-05-29 00:58:23.221846 | orchestrator | 
skipping: [testbed-node-0] 2025-05-29 00:58:23.221852 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.221857 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.221862 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.221867 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.221872 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.221878 | orchestrator | 2025-05-29 00:58:23.221883 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-29 00:58:23.221888 | orchestrator | Thursday 29 May 2025 00:48:58 +0000 (0:00:00.648) 0:03:43.436 ********** 2025-05-29 00:58:23.221894 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.221899 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.221904 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.221909 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.221914 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.221920 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.221925 | orchestrator | 2025-05-29 00:58:23.221930 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-29 00:58:23.221944 | orchestrator | Thursday 29 May 2025 00:48:59 +0000 (0:00:00.944) 0:03:44.380 ********** 2025-05-29 00:58:23.221950 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.221955 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.221960 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.221966 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.221971 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.221976 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.221982 | orchestrator | 2025-05-29 00:58:23.221987 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-29 00:58:23.221992 | orchestrator 
| Thursday 29 May 2025 00:49:00 +0000 (0:00:00.812) 0:03:45.193 ********** 2025-05-29 00:58:23.221998 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-29 00:58:23.222003 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-29 00:58:23.222009 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-29 00:58:23.222014 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.222167 | orchestrator | 2025-05-29 00:58:23.222174 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-29 00:58:23.222179 | orchestrator | Thursday 29 May 2025 00:49:00 +0000 (0:00:00.404) 0:03:45.597 ********** 2025-05-29 00:58:23.222184 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-29 00:58:23.222190 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-29 00:58:23.222195 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-29 00:58:23.222200 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.222206 | orchestrator | 2025-05-29 00:58:23.222226 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-29 00:58:23.222233 | orchestrator | Thursday 29 May 2025 00:49:01 +0000 (0:00:00.683) 0:03:46.281 ********** 2025-05-29 00:58:23.222238 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-29 00:58:23.222244 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-29 00:58:23.222249 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-29 00:58:23.222255 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.222260 | orchestrator | 2025-05-29 00:58:23.222266 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-29 00:58:23.222271 | orchestrator | Thursday 29 May 2025 00:49:02 +0000 (0:00:00.890) 
0:03:47.171 ********** 2025-05-29 00:58:23.222276 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.222282 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.222287 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.222292 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.222298 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.222303 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.222309 | orchestrator | 2025-05-29 00:58:23.222314 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-29 00:58:23.222319 | orchestrator | Thursday 29 May 2025 00:49:03 +0000 (0:00:00.755) 0:03:47.927 ********** 2025-05-29 00:58:23.222325 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-29 00:58:23.222330 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.222335 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-29 00:58:23.222341 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.222346 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-29 00:58:23.222351 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.222357 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-29 00:58:23.222362 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-29 00:58:23.222367 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-29 00:58:23.222373 | orchestrator | 2025-05-29 00:58:23.222378 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-29 00:58:23.222384 | orchestrator | Thursday 29 May 2025 00:49:05 +0000 (0:00:01.824) 0:03:49.751 ********** 2025-05-29 00:58:23.222389 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.222445 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.222453 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.222459 | orchestrator | skipping: [testbed-node-3] 2025-05-29 
00:58:23.222465 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.222470 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.222475 | orchestrator | 2025-05-29 00:58:23.222481 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-29 00:58:23.222486 | orchestrator | Thursday 29 May 2025 00:49:05 +0000 (0:00:00.567) 0:03:50.319 ********** 2025-05-29 00:58:23.222491 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.222497 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.222502 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.222507 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.222512 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.222518 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.222523 | orchestrator | 2025-05-29 00:58:23.222528 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-29 00:58:23.222534 | orchestrator | Thursday 29 May 2025 00:49:06 +0000 (0:00:00.755) 0:03:51.075 ********** 2025-05-29 00:58:23.222539 | orchestrator | skipping: [testbed-node-0] => (item=0)  2025-05-29 00:58:23.222544 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.222549 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-29 00:58:23.222555 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-29 00:58:23.222560 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.222565 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-29 00:58:23.222571 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.222576 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.222581 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-29 00:58:23.222586 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.222592 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-29 
00:58:23.222597 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.222602 | orchestrator | 2025-05-29 00:58:23.222607 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-29 00:58:23.222613 | orchestrator | Thursday 29 May 2025 00:49:07 +0000 (0:00:00.742) 0:03:51.817 ********** 2025-05-29 00:58:23.222618 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.222623 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.222633 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.222639 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.222644 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.222649 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.222655 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.222660 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.222665 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.222671 | orchestrator | 2025-05-29 00:58:23.222681 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-29 00:58:23.222690 | orchestrator | Thursday 29 May 2025 00:49:07 +0000 (0:00:00.791) 0:03:52.609 ********** 2025-05-29 00:58:23.222704 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)  2025-05-29 00:58:23.222717 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)  2025-05-29 00:58:23.222725 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)  2025-05-29 00:58:23.222733 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.222742 | orchestrator | skipping: 
[testbed-node-1] => (item=testbed-node-3)  2025-05-29 00:58:23.222750 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)  2025-05-29 00:58:23.222765 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)  2025-05-29 00:58:23.222773 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.222782 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)  2025-05-29 00:58:23.222791 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)  2025-05-29 00:58:23.222800 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)  2025-05-29 00:58:23.222809 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.222818 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.222826 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.222832 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-29 00:58:23.222837 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.222843 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.222848 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-29 00:58:23.222871 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-29 00:58:23.222876 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-29 00:58:23.222882 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-29 00:58:23.222887 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.222896 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-29 00:58:23.222904 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.222912 | orchestrator | 2025-05-29 00:58:23.222921 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-29 00:58:23.222929 | orchestrator | Thursday 29 May 2025 00:49:09 
+0000 (0:00:01.471) 0:03:54.080 ********** 2025-05-29 00:58:23.222938 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.222946 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.222955 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:58:23.222961 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.222966 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.222972 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:58:23.222977 | orchestrator | 2025-05-29 00:58:23.223041 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-29 00:58:23.223049 | orchestrator | Thursday 29 May 2025 00:49:14 +0000 (0:00:05.034) 0:03:59.115 ********** 2025-05-29 00:58:23.223055 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.223060 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.223066 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.223071 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:58:23.223076 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.223082 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:58:23.223088 | orchestrator | 2025-05-29 00:58:23.223093 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-29 00:58:23.223099 | orchestrator | Thursday 29 May 2025 00:49:15 +0000 (0:00:01.119) 0:04:00.235 ********** 2025-05-29 00:58:23.223104 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.223110 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.223115 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.223121 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:58:23.223126 | orchestrator | 2025-05-29 00:58:23.223132 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 
2025-05-29 00:58:23.223137 | orchestrator | Thursday 29 May 2025 00:49:16 +0000 (0:00:00.893) 0:04:01.129 ********** 2025-05-29 00:58:23.223143 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.223149 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.223154 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.223160 | orchestrator | 2025-05-29 00:58:23.223165 | orchestrator | TASK [ceph-handler : set _mon_handler_called before restart] ******************* 2025-05-29 00:58:23.223171 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 00:58:23.223182 | orchestrator | 2025-05-29 00:58:23.223188 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-29 00:58:23.223194 | orchestrator | Thursday 29 May 2025 00:49:17 +0000 (0:00:00.894) 0:04:02.024 ********** 2025-05-29 00:58:23.223199 | orchestrator | 2025-05-29 00:58:23.223205 | orchestrator | TASK [ceph-handler : copy mon restart script] ********************************** 2025-05-29 00:58:23.223210 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.223238 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.223244 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.223250 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.223255 | orchestrator | 2025-05-29 00:58:23.223261 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-29 00:58:23.223266 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.223272 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.223277 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.223282 | orchestrator | 2025-05-29 00:58:23.223288 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] 
******************** 2025-05-29 00:58:23.223293 | orchestrator | Thursday 29 May 2025 00:49:18 +0000 (0:00:01.178) 0:04:03.202 ********** 2025-05-29 00:58:23.223299 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-29 00:58:23.223304 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-29 00:58:23.223310 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-29 00:58:23.223315 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.223321 | orchestrator | 2025-05-29 00:58:23.223326 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-29 00:58:23.223331 | orchestrator | Thursday 29 May 2025 00:49:19 +0000 (0:00:00.716) 0:04:03.918 ********** 2025-05-29 00:58:23.223337 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.223342 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.223348 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.223353 | orchestrator | 2025-05-29 00:58:23.223359 | orchestrator | TASK [ceph-handler : set _mon_handler_called after restart] ******************** 2025-05-29 00:58:23.223364 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.223369 | orchestrator | 2025-05-29 00:58:23.223375 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] ********************************** 2025-05-29 00:58:23.223380 | orchestrator | Thursday 29 May 2025 00:49:19 +0000 (0:00:00.611) 0:04:04.530 ********** 2025-05-29 00:58:23.223386 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.223391 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.223397 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.223402 | orchestrator | 2025-05-29 00:58:23.223407 | orchestrator | TASK [ceph-handler : osds handler] ********************************************* 2025-05-29 00:58:23.223413 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.223418 | 
orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.223424 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.223429 | orchestrator | 2025-05-29 00:58:23.223435 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-29 00:58:23.223440 | orchestrator | Thursday 29 May 2025 00:49:20 +0000 (0:00:00.583) 0:04:05.113 ********** 2025-05-29 00:58:23.223445 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.223451 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.223456 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.223462 | orchestrator | 2025-05-29 00:58:23.223467 | orchestrator | TASK [ceph-handler : mdss handler] ********************************************* 2025-05-29 00:58:23.223472 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.223478 | orchestrator | 2025-05-29 00:58:23.223483 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] ********************************** 2025-05-29 00:58:23.223489 | orchestrator | Thursday 29 May 2025 00:49:20 +0000 (0:00:00.511) 0:04:05.624 ********** 2025-05-29 00:58:23.223498 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.223504 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.223509 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.223515 | orchestrator | 2025-05-29 00:58:23.223520 | orchestrator | TASK [ceph-handler : rgws handler] ********************************************* 2025-05-29 00:58:23.223525 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.223531 | orchestrator | 2025-05-29 00:58:23.223536 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] ************** 2025-05-29 00:58:23.223542 | orchestrator | Thursday 29 May 2025 00:49:21 +0000 (0:00:00.886) 0:04:06.511 ********** 2025-05-29 00:58:23.223547 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.223553 | orchestrator | 
2025-05-29 00:58:23.223598 | orchestrator | RUNNING HANDLER [ceph-handler : rbdmirrors handler] **************************** 2025-05-29 00:58:23.223605 | orchestrator | Thursday 29 May 2025 00:49:22 +0000 (0:00:00.144) 0:04:06.655 ********** 2025-05-29 00:58:23.223611 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.223616 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.223621 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.223627 | orchestrator | 2025-05-29 00:58:23.223632 | orchestrator | TASK [ceph-handler : rbdmirrors handler] *************************************** 2025-05-29 00:58:23.223638 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.223643 | orchestrator | 2025-05-29 00:58:23.223649 | orchestrator | RUNNING HANDLER [ceph-handler : mgrs handler] ********************************** 2025-05-29 00:58:23.223654 | orchestrator | Thursday 29 May 2025 00:49:22 +0000 (0:00:00.511) 0:04:07.166 ********** 2025-05-29 00:58:23.223659 | orchestrator | 2025-05-29 00:58:23.223665 | orchestrator | TASK [ceph-handler : mgrs handler] ********************************************* 2025-05-29 00:58:23.223670 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.223676 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:58:23.223681 | orchestrator | 2025-05-29 00:58:23.223687 | orchestrator | RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ******** 2025-05-29 00:58:23.223692 | orchestrator | Thursday 29 May 2025 00:49:23 +0000 (0:00:01.169) 0:04:08.336 ********** 2025-05-29 00:58:23.223698 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.223703 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.223708 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.223714 | orchestrator | 2025-05-29 00:58:23.223719 | orchestrator | TASK [ceph-handler : set _mgr_handler_called before 
restart] ******************* 2025-05-29 00:58:23.223725 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.223730 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.223736 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.223741 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.223746 | orchestrator | 2025-05-29 00:58:23.223752 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-29 00:58:23.223757 | orchestrator | Thursday 29 May 2025 00:49:24 +0000 (0:00:01.070) 0:04:09.407 ********** 2025-05-29 00:58:23.223762 | orchestrator | 2025-05-29 00:58:23.223771 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-29 00:58:23.223777 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.223782 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.223788 | orchestrator | 2025-05-29 00:58:23.223793 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-29 00:58:23.223799 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.223804 | orchestrator | 2025-05-29 00:58:23.223809 | orchestrator | TASK [ceph-handler : copy mgr restart script] ********************************** 2025-05-29 00:58:23.223815 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.223820 | orchestrator | 2025-05-29 00:58:23.223826 | orchestrator | RUNNING HANDLER [ceph-handler : copy mgr restart script] *********************** 2025-05-29 00:58:23.223831 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.223836 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.223846 | orchestrator | 2025-05-29 00:58:23.223851 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ******************** 2025-05-29 00:58:23.223857 | orchestrator | Thursday 
29 May 2025 00:49:26 +0000 (0:00:01.633) 0:04:11.040 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] *********
Thursday 29 May 2025 00:49:27 +0000 (0:00:00.729) 0:04:11.770 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : set _mgr_handler_called after restart] ********************
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : mdss handler] **********************************
Thursday 29 May 2025 00:49:28 +0000 (0:00:01.066) 0:04:12.836 **********
included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ******
Thursday 29 May 2025 00:49:28 +0000 (0:00:00.638) 0:04:13.475 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : rbd-target-api and rbd-target-gw handler] *****************
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : copy mds restart script] ***********************
Thursday 29 May 2025 00:49:30 +0000 (0:00:01.209) 0:04:14.684 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
Thursday 29 May 2025 00:49:31 +0000 (0:00:01.240) 0:04:15.925 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-handler : remove tempdir for scripts] *******************************
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] *********
Thursday 29 May 2025 00:49:32 +0000 (0:00:01.573) 0:04:17.498 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : rgws handler] **********************************
Thursday 29 May 2025 00:49:34 +0000 (0:00:01.547) 0:04:19.046 **********
included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5

RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ********
Thursday 29 May 2025 00:49:35 +0000 (0:00:00.831) 0:04:19.878 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : copy rgw restart script] ***********************
Thursday 29 May 2025 00:49:35 +0000 (0:00:00.571) 0:04:20.449 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ********************
Thursday 29 May 2025 00:49:37 +0000 (0:00:02.118) 0:04:22.567 **********
skipping: [testbed-node-3] => (item=testbed-node-3)
skipping: [testbed-node-3] => (item=testbed-node-4)
skipping: [testbed-node-3] => (item=testbed-node-5)
skipping: [testbed-node-3]

RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] *********
Thursday 29 May 2025 00:49:38 +0000 (0:00:00.500) 0:04:23.067 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

RUNNING HANDLER [ceph-handler : rbdmirrors handler] ****************************
Thursday 29 May 2025 00:49:38 +0000 (0:00:00.307) 0:04:23.375 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
Thursday 29 May 2025 00:49:39 +0000 (0:00:00.315) 0:04:23.691 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : rbd-target-api and rbd-target-gw handler] ******
Thursday 29 May 2025 00:49:39 +0000 (0:00:00.437) 0:04:24.128 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
Thursday 29 May 2025 00:49:39 +0000 (0:00:00.305) 0:04:24.434 **********
changed: [testbed-node-3]
changed: [testbed-node-4]
changed: [testbed-node-5]

PLAY [Apply role ceph-mon] *****************************************************

TASK [ceph-handler : include check_running_containers.yml] *********************
Thursday 29 May 2025 00:49:41 +0000 (0:00:01.921) 0:04:26.356 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-handler : check for a mon container] ********************************
Thursday 29 May 2025 00:49:42 +0000 (0:00:00.630) 0:04:26.987 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : check for an osd container] *******************************
Thursday 29 May 2025 00:49:43 +0000 (0:00:00.685) 0:04:27.672 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a mds container] ********************************
Thursday 29 May 2025 00:49:43 +0000 (0:00:00.314) 0:04:27.986 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a rgw container] ********************************
Thursday 29 May 2025 00:49:43 +0000 (0:00:00.572) 0:04:28.559 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a mgr container] ********************************
Thursday 29 May 2025 00:49:44 +0000 (0:00:00.353) 0:04:28.913 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : check for a rbd mirror container] *************************
Thursday 29 May 2025 00:49:45 +0000 (0:00:00.813) 0:04:29.726 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a nfs container] ********************************
Thursday 29 May 2025 00:49:45 +0000 (0:00:00.312) 0:04:30.039 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a tcmu-runner container] ************************
Thursday 29 May 2025 00:49:45 +0000 (0:00:00.593) 0:04:30.633 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a rbd-target-api container] *********************
Thursday 29 May 2025 00:49:46 +0000 (0:00:00.348) 0:04:30.981 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a rbd-target-gw container] **********************
Thursday 29 May 2025 00:49:46 +0000 (0:00:00.362) 0:04:31.344 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : check for a ceph-crash container] *************************
Thursday 29 May 2025 00:49:47 +0000 (0:00:00.312) 0:04:31.656 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : include check_socket_non_container.yml] *******************
Thursday 29 May 2025 00:49:48 +0000 (0:00:01.106) 0:04:32.763 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact handler_mon_status] ******************************
Thursday 29 May 2025 00:49:48 +0000 (0:00:00.323) 0:04:33.087 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : set_fact handler_osd_status] ******************************
Thursday 29 May 2025 00:49:48 +0000 (0:00:00.388) 0:04:33.475 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact handler_mds_status] ******************************
Thursday 29 May 2025 00:49:49 +0000 (0:00:00.323) 0:04:33.798 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact handler_rgw_status] ******************************
Thursday 29 May 2025 00:49:49 +0000 (0:00:00.729) 0:04:34.528 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact handler_nfs_status] ******************************
Thursday 29 May 2025 00:49:50 +0000 (0:00:00.340) 0:04:34.868 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact handler_rbd_status] ******************************
Thursday 29 May 2025 00:49:50 +0000 (0:00:00.465) 0:04:35.336 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact handler_mgr_status] ******************************
Thursday 29 May 2025 00:49:51 +0000 (0:00:00.356) 0:04:35.693 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : set_fact handler_crash_status] ****************************
Thursday 29 May 2025 00:49:51 +0000 (0:00:00.476) 0:04:36.169 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
Thursday 29 May 2025 00:49:51 +0000 (0:00:00.330) 0:04:36.500 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
Thursday 29 May 2025 00:49:52 +0000 (0:00:00.290) 0:04:36.790 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : reset num_osds] ********************************************
Thursday 29 May 2025 00:49:52 +0000 (0:00:00.298) 0:04:37.088 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : count number of osds for lvm scenario] *********************
Thursday 29 May 2025 00:49:52 +0000 (0:00:00.479) 0:04:37.568 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : look up for ceph-volume rejected devices] ******************
Thursday 29 May 2025 00:49:53 +0000 (0:00:00.341) 0:04:37.910 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact rejected_devices] *********************************
Thursday 29 May 2025 00:49:53 +0000 (0:00:00.484) 0:04:38.395 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact _devices] *****************************************
Thursday 29 May 2025 00:49:54 +0000 (0:00:00.347) 0:04:38.742 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Thursday 29 May 2025 00:49:54 +0000 (0:00:00.477) 0:04:39.220 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Thursday 29 May 2025 00:49:54 +0000 (0:00:00.326) 0:04:39.546 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Thursday 29 May 2025 00:49:55 +0000 (0:00:00.409) 0:04:39.956 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
Thursday 29 May 2025 00:49:55 +0000 (0:00:00.411) 0:04:40.367 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
Thursday 29 May 2025 00:49:56 +0000 (0:00:00.532) 0:04:40.900 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
Thursday 29 May 2025 00:49:56 +0000 (0:00:00.313) 0:04:41.213 **********
skipping: [testbed-node-0] => (item=)
skipping: [testbed-node-0] => (item=)
skipping: [testbed-node-1] => (item=)
skipping: [testbed-node-1] => (item=)
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=)
skipping: [testbed-node-2] => (item=)
skipping: [testbed-node-2]

TASK [ceph-config : drop osd_memory_target from conf override] *****************
Thursday 29 May 2025 00:49:56 +0000 (0:00:00.341) 0:04:41.555 **********
skipping: [testbed-node-0] => (item=osd memory target)
skipping: [testbed-node-0] => (item=osd_memory_target)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=osd memory target)
skipping: [testbed-node-1] => (item=osd_memory_target)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=osd memory target)
skipping: [testbed-node-2] => (item=osd_memory_target)
skipping: [testbed-node-2]

TASK [ceph-config : set_fact _osd_memory_target] *******************************
Thursday 29 May 2025 00:49:57 +0000 (0:00:00.348) 0:04:41.903 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : create ceph conf directory] ********************************
Thursday 29 May 2025 00:49:57 +0000 (0:00:00.458) 0:04:42.362 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Thursday 29 May 2025 00:49:57 +0000 (0:00:00.267) 0:04:42.629 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
Thursday 29 May 2025 00:49:58 +0000 (0:00:00.252) 0:04:42.882 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
Thursday 29 May 2025 00:49:58 +0000 (0:00:00.260) 0:04:43.142 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
Thursday 29 May 2025 00:49:58 +0000 (0:00:00.440) 0:04:43.583 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact _interface] ****************************************
Thursday 29 May 2025 00:49:59 +0000 (0:00:00.290) 0:04:43.874 **********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
Thursday 29 May 2025 00:49:59 +0000 (0:00:00.374) 0:04:44.248 **********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
Thursday 29 May 2025 00:50:00 +0000 (0:00:00.400) 0:04:44.649 **********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Thursday 29 May 2025 00:50:00 +0000 (0:00:00.461) 0:04:45.030 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
Thursday 29 May 2025 00:50:00 +0000 (0:00:00.401) 0:04:45.492 **********
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
Thursday 29 May 2025 00:50:01 +0000 (0:00:00.401) 0:04:45.894 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Thursday 29 May 2025 00:50:01 +0000 (0:00:00.273) 0:04:46.167 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
Thursday 29 May 2025 00:50:01 +0000 (0:00:00.313) 0:04:46.481 **********
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact rgw_instances_host] ********************************
Thursday 29 May 2025 00:50:02 +0000 (0:00:00.777) 0:04:47.259 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact rgw_instances_all] *********************************
Thursday 29 May 2025 00:50:02 +0000 (0:00:00.283) 0:04:47.543 **********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=testbed-node-3)
skipping: [testbed-node-1] => (item=testbed-node-4)
skipping: [testbed-node-1] => (item=testbed-node-5)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-2]

TASK [ceph-config : generate ceph.conf configuration file] *********************
Thursday 29 May 2025 00:50:03 +0000 (0:00:00.492) 0:04:48.035 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-rgw : create rgw keyrings] ******************************************
Thursday 29 May 2025 00:50:04 +0000 (0:00:00.659) 0:04:48.695 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-rgw : include_tasks multisite] **************************************
Thursday 29 May 2025 00:50:04 +0000 (0:00:00.525) 0:04:49.220 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
Thursday 29 May 2025 00:50:05 +0000 (0:00:00.681) 0:04:49.902 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mon : set_fact container_exec_cmd] **********************************
Thursday 29 May 2025 00:50:05 +0000 (0:00:00.509) 0:04:50.411 **********
ok: [testbed-node-0]
2025-05-29 00:58:23.226882 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.226887 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.226892 | orchestrator | 2025-05-29 00:58:23.226897 | orchestrator | TASK [ceph-mon : include deploy_monitors.yml] ********************************** 2025-05-29 00:58:23.226902 | orchestrator | Thursday 29 May 2025 00:50:06 +0000 (0:00:00.472) 0:04:50.883 ********** 2025-05-29 00:58:23.226906 | orchestrator | included: /ansible/roles/ceph-mon/tasks/deploy_monitors.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:58:23.226915 | orchestrator | 2025-05-29 00:58:23.226920 | orchestrator | TASK [ceph-mon : check if monitor initial keyring already exists] ************** 2025-05-29 00:58:23.226925 | orchestrator | Thursday 29 May 2025 00:50:06 +0000 (0:00:00.584) 0:04:51.468 ********** 2025-05-29 00:58:23.226930 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.226934 | orchestrator | 2025-05-29 00:58:23.226939 | orchestrator | TASK [ceph-mon : generate monitor initial keyring] ***************************** 2025-05-29 00:58:23.226944 | orchestrator | Thursday 29 May 2025 00:50:06 +0000 (0:00:00.139) 0:04:51.607 ********** 2025-05-29 00:58:23.226949 | orchestrator | changed: [testbed-node-0 -> localhost] 2025-05-29 00:58:23.226954 | orchestrator | 2025-05-29 00:58:23.226959 | orchestrator | TASK [ceph-mon : set_fact _initial_mon_key_success] **************************** 2025-05-29 00:58:23.226963 | orchestrator | Thursday 29 May 2025 00:50:07 +0000 (0:00:00.665) 0:04:52.273 ********** 2025-05-29 00:58:23.226968 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.226973 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.226978 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.226983 | orchestrator | 2025-05-29 00:58:23.226988 | orchestrator | TASK [ceph-mon : get initial keyring when it already exists] ******************* 2025-05-29 00:58:23.226992 | orchestrator | Thursday 
29 May 2025 00:50:08 +0000 (0:00:00.476) 0:04:52.750 ********** 2025-05-29 00:58:23.226997 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.227002 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.227007 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.227012 | orchestrator | 2025-05-29 00:58:23.227020 | orchestrator | TASK [ceph-mon : create monitor initial keyring] ******************************* 2025-05-29 00:58:23.227025 | orchestrator | Thursday 29 May 2025 00:50:08 +0000 (0:00:00.381) 0:04:53.131 ********** 2025-05-29 00:58:23.227030 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.227034 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.227039 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.227044 | orchestrator | 2025-05-29 00:58:23.227049 | orchestrator | TASK [ceph-mon : copy the initial key in /etc/ceph (for containers)] *********** 2025-05-29 00:58:23.227054 | orchestrator | Thursday 29 May 2025 00:50:09 +0000 (0:00:01.199) 0:04:54.331 ********** 2025-05-29 00:58:23.227060 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.227065 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.227071 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.227077 | orchestrator | 2025-05-29 00:58:23.227082 | orchestrator | TASK [ceph-mon : create monitor directory] ************************************* 2025-05-29 00:58:23.227088 | orchestrator | Thursday 29 May 2025 00:50:10 +0000 (0:00:00.792) 0:04:55.123 ********** 2025-05-29 00:58:23.227093 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.227099 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.227105 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.227111 | orchestrator | 2025-05-29 00:58:23.227116 | orchestrator | TASK [ceph-mon : recursively fix ownership of monitor directory] *************** 2025-05-29 00:58:23.227122 | orchestrator | Thursday 29 May 2025 00:50:11 +0000 
(0:00:01.065) 0:04:56.189 ********** 2025-05-29 00:58:23.227127 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.227133 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.227138 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.227144 | orchestrator | 2025-05-29 00:58:23.227150 | orchestrator | TASK [ceph-mon : create custom admin keyring] ********************************** 2025-05-29 00:58:23.227155 | orchestrator | Thursday 29 May 2025 00:50:12 +0000 (0:00:00.672) 0:04:56.862 ********** 2025-05-29 00:58:23.227161 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.227166 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.227172 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.227177 | orchestrator | 2025-05-29 00:58:23.227185 | orchestrator | TASK [ceph-mon : set_fact ceph-authtool container command] ********************* 2025-05-29 00:58:23.227194 | orchestrator | Thursday 29 May 2025 00:50:12 +0000 (0:00:00.363) 0:04:57.225 ********** 2025-05-29 00:58:23.227203 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.227256 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.227266 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.227274 | orchestrator | 2025-05-29 00:58:23.227282 | orchestrator | TASK [ceph-mon : import admin keyring into mon keyring] ************************ 2025-05-29 00:58:23.227291 | orchestrator | Thursday 29 May 2025 00:50:12 +0000 (0:00:00.335) 0:04:57.561 ********** 2025-05-29 00:58:23.227300 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.227309 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.227317 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.227325 | orchestrator | 2025-05-29 00:58:23.227334 | orchestrator | TASK [ceph-mon : set_fact ceph-mon container command] ************************** 2025-05-29 00:58:23.227343 | orchestrator | Thursday 29 May 2025 00:50:13 +0000 (0:00:00.550) 0:04:58.111 ********** 
2025-05-29 00:58:23.227351 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.227360 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.227369 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.227378 | orchestrator | 2025-05-29 00:58:23.227386 | orchestrator | TASK [ceph-mon : ceph monitor mkfs with keyring] ******************************* 2025-05-29 00:58:23.227395 | orchestrator | Thursday 29 May 2025 00:50:13 +0000 (0:00:00.317) 0:04:58.428 ********** 2025-05-29 00:58:23.227403 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.227411 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.227418 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.227425 | orchestrator | 2025-05-29 00:58:23.227433 | orchestrator | TASK [ceph-mon : ceph monitor mkfs without keyring] **************************** 2025-05-29 00:58:23.227469 | orchestrator | Thursday 29 May 2025 00:50:15 +0000 (0:00:01.336) 0:04:59.764 ********** 2025-05-29 00:58:23.227477 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.227484 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.227492 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.227499 | orchestrator | 2025-05-29 00:58:23.227504 | orchestrator | TASK [ceph-mon : include start_monitor.yml] ************************************ 2025-05-29 00:58:23.227509 | orchestrator | Thursday 29 May 2025 00:50:15 +0000 (0:00:00.626) 0:05:00.391 ********** 2025-05-29 00:58:23.227514 | orchestrator | included: /ansible/roles/ceph-mon/tasks/start_monitor.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:58:23.227519 | orchestrator | 2025-05-29 00:58:23.227524 | orchestrator | TASK [ceph-mon : ensure systemd service override directory exists] ************* 2025-05-29 00:58:23.227528 | orchestrator | Thursday 29 May 2025 00:50:16 +0000 (0:00:00.559) 0:05:00.950 ********** 2025-05-29 00:58:23.227533 | orchestrator | skipping: [testbed-node-0] 2025-05-29 
00:58:23.227538 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.227543 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.227548 | orchestrator | 2025-05-29 00:58:23.227552 | orchestrator | TASK [ceph-mon : add ceph-mon systemd service overrides] *********************** 2025-05-29 00:58:23.227557 | orchestrator | Thursday 29 May 2025 00:50:16 +0000 (0:00:00.339) 0:05:01.289 ********** 2025-05-29 00:58:23.227562 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.227567 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.227572 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.227576 | orchestrator | 2025-05-29 00:58:23.227582 | orchestrator | TASK [ceph-mon : include_tasks systemd.yml] ************************************ 2025-05-29 00:58:23.227586 | orchestrator | Thursday 29 May 2025 00:50:17 +0000 (0:00:00.561) 0:05:01.851 ********** 2025-05-29 00:58:23.227591 | orchestrator | included: /ansible/roles/ceph-mon/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:58:23.227596 | orchestrator | 2025-05-29 00:58:23.227600 | orchestrator | TASK [ceph-mon : generate systemd unit file for mon container] ***************** 2025-05-29 00:58:23.227605 | orchestrator | Thursday 29 May 2025 00:50:17 +0000 (0:00:00.629) 0:05:02.481 ********** 2025-05-29 00:58:23.227609 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.227614 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.227618 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.227628 | orchestrator | 2025-05-29 00:58:23.227636 | orchestrator | TASK [ceph-mon : generate systemd ceph-mon target file] ************************ 2025-05-29 00:58:23.227641 | orchestrator | Thursday 29 May 2025 00:50:19 +0000 (0:00:01.235) 0:05:03.716 ********** 2025-05-29 00:58:23.227646 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.227650 | orchestrator | changed: [testbed-node-1] 2025-05-29 
00:58:23.227655 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.227659 | orchestrator | 2025-05-29 00:58:23.227664 | orchestrator | TASK [ceph-mon : enable ceph-mon.target] *************************************** 2025-05-29 00:58:23.227668 | orchestrator | Thursday 29 May 2025 00:50:20 +0000 (0:00:01.464) 0:05:05.180 ********** 2025-05-29 00:58:23.227673 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.227677 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.227682 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.227686 | orchestrator | 2025-05-29 00:58:23.227691 | orchestrator | TASK [ceph-mon : start the monitor service] ************************************ 2025-05-29 00:58:23.227696 | orchestrator | Thursday 29 May 2025 00:50:22 +0000 (0:00:01.762) 0:05:06.943 ********** 2025-05-29 00:58:23.227700 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.227705 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.227709 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.227714 | orchestrator | 2025-05-29 00:58:23.227718 | orchestrator | TASK [ceph-mon : include_tasks ceph_keys.yml] ********************************** 2025-05-29 00:58:23.227723 | orchestrator | Thursday 29 May 2025 00:50:24 +0000 (0:00:02.075) 0:05:09.018 ********** 2025-05-29 00:58:23.227728 | orchestrator | included: /ansible/roles/ceph-mon/tasks/ceph_keys.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:58:23.227732 | orchestrator | 2025-05-29 00:58:23.227737 | orchestrator | TASK [ceph-mon : waiting for the monitor(s) to form the quorum...] ************* 2025-05-29 00:58:23.227741 | orchestrator | Thursday 29 May 2025 00:50:25 +0000 (0:00:00.927) 0:05:09.946 ********** 2025-05-29 00:58:23.227746 | orchestrator | FAILED - RETRYING: [testbed-node-0]: waiting for the monitor(s) to form the quorum... (10 retries left). 
2025-05-29 00:58:23.227751 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.227755 | orchestrator | 2025-05-29 00:58:23.227760 | orchestrator | TASK [ceph-mon : fetch ceph initial keys] ************************************** 2025-05-29 00:58:23.227764 | orchestrator | Thursday 29 May 2025 00:50:46 +0000 (0:00:21.411) 0:05:31.358 ********** 2025-05-29 00:58:23.227769 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.227773 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.227778 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.227782 | orchestrator | 2025-05-29 00:58:23.227787 | orchestrator | TASK [ceph-mon : include secure_cluster.yml] *********************************** 2025-05-29 00:58:23.227792 | orchestrator | Thursday 29 May 2025 00:50:54 +0000 (0:00:07.554) 0:05:38.912 ********** 2025-05-29 00:58:23.227796 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.227801 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.227805 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.227810 | orchestrator | 2025-05-29 00:58:23.227814 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-29 00:58:23.227819 | orchestrator | Thursday 29 May 2025 00:50:55 +0000 (0:00:01.197) 0:05:40.109 ********** 2025-05-29 00:58:23.227823 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.227828 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.227832 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.227837 | orchestrator | 2025-05-29 00:58:23.227842 | orchestrator | RUNNING HANDLER [ceph-handler : mons handler] ********************************** 2025-05-29 00:58:23.227846 | orchestrator | Thursday 29 May 2025 00:50:56 +0000 (0:00:00.749) 0:05:40.859 ********** 2025-05-29 00:58:23.227851 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mons.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 
00:58:23.227855 | orchestrator | 2025-05-29 00:58:23.227860 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called before restart] ******** 2025-05-29 00:58:23.227879 | orchestrator | Thursday 29 May 2025 00:50:57 +0000 (0:00:00.890) 0:05:41.750 ********** 2025-05-29 00:58:23.227888 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.227893 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.227897 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.227902 | orchestrator | 2025-05-29 00:58:23.227906 | orchestrator | RUNNING HANDLER [ceph-handler : copy mon restart script] *********************** 2025-05-29 00:58:23.227911 | orchestrator | Thursday 29 May 2025 00:50:57 +0000 (0:00:00.400) 0:05:42.151 ********** 2025-05-29 00:58:23.227915 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.227920 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.227924 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.227929 | orchestrator | 2025-05-29 00:58:23.227933 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mon daemon(s)] ******************** 2025-05-29 00:58:23.227938 | orchestrator | Thursday 29 May 2025 00:50:58 +0000 (0:00:01.285) 0:05:43.437 ********** 2025-05-29 00:58:23.227942 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-29 00:58:23.227947 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-29 00:58:23.227951 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-29 00:58:23.227956 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.227960 | orchestrator | 2025-05-29 00:58:23.227965 | orchestrator | RUNNING HANDLER [ceph-handler : set _mon_handler_called after restart] ********* 2025-05-29 00:58:23.227969 | orchestrator | Thursday 29 May 2025 00:50:59 +0000 (0:00:01.045) 0:05:44.482 ********** 2025-05-29 00:58:23.227974 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.227978 | 
orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.227983 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.227987 | orchestrator | 2025-05-29 00:58:23.227992 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-29 00:58:23.227996 | orchestrator | Thursday 29 May 2025 00:51:00 +0000 (0:00:00.365) 0:05:44.847 ********** 2025-05-29 00:58:23.228001 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:23.228005 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:23.228010 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:23.228014 | orchestrator | 2025-05-29 00:58:23.228019 | orchestrator | PLAY [Apply role ceph-mgr] ***************************************************** 2025-05-29 00:58:23.228023 | orchestrator | 2025-05-29 00:58:23.228028 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] ********************* 2025-05-29 00:58:23.228035 | orchestrator | Thursday 29 May 2025 00:51:02 +0000 (0:00:02.155) 0:05:47.002 ********** 2025-05-29 00:58:23.228040 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:58:23.228044 | orchestrator | 2025-05-29 00:58:23.228049 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-29 00:58:23.228053 | orchestrator | Thursday 29 May 2025 00:51:03 +0000 (0:00:00.817) 0:05:47.819 ********** 2025-05-29 00:58:23.228058 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.228062 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.228067 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.228071 | orchestrator | 2025-05-29 00:58:23.228076 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-29 00:58:23.228080 | orchestrator | Thursday 29 May 2025 00:51:03 +0000 (0:00:00.737) 0:05:48.557 ********** 2025-05-29 
00:58:23.228085 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.228089 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.228094 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.228098 | orchestrator | 2025-05-29 00:58:23.228103 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-29 00:58:23.228107 | orchestrator | Thursday 29 May 2025 00:51:04 +0000 (0:00:00.361) 0:05:48.918 ********** 2025-05-29 00:58:23.228112 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.228116 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.228121 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.228125 | orchestrator | 2025-05-29 00:58:23.228133 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 2025-05-29 00:58:23.228138 | orchestrator | Thursday 29 May 2025 00:51:05 +0000 (0:00:00.726) 0:05:49.645 ********** 2025-05-29 00:58:23.228142 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.228147 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.228151 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.228156 | orchestrator | 2025-05-29 00:58:23.228160 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-29 00:58:23.228165 | orchestrator | Thursday 29 May 2025 00:51:05 +0000 (0:00:00.370) 0:05:50.015 ********** 2025-05-29 00:58:23.228169 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.228174 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.228178 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.228183 | orchestrator | 2025-05-29 00:58:23.228187 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-29 00:58:23.228192 | orchestrator | Thursday 29 May 2025 00:51:06 +0000 (0:00:00.752) 0:05:50.767 ********** 2025-05-29 00:58:23.228196 | 
orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.228201 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.228205 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.228209 | orchestrator | 2025-05-29 00:58:23.228231 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-29 00:58:23.228236 | orchestrator | Thursday 29 May 2025 00:51:06 +0000 (0:00:00.342) 0:05:51.110 ********** 2025-05-29 00:58:23.228240 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.228245 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.228249 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.228254 | orchestrator | 2025-05-29 00:58:23.228258 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-29 00:58:23.228263 | orchestrator | Thursday 29 May 2025 00:51:07 +0000 (0:00:00.619) 0:05:51.729 ********** 2025-05-29 00:58:23.228267 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.228272 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.228276 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.228281 | orchestrator | 2025-05-29 00:58:23.228286 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-29 00:58:23.228304 | orchestrator | Thursday 29 May 2025 00:51:07 +0000 (0:00:00.360) 0:05:52.089 ********** 2025-05-29 00:58:23.228309 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.228314 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.228319 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.228323 | orchestrator | 2025-05-29 00:58:23.228328 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-29 00:58:23.228332 | orchestrator | Thursday 29 May 2025 00:51:07 +0000 (0:00:00.383) 0:05:52.473 ********** 2025-05-29 00:58:23.228337 | 
orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.228341 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.228346 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.228350 | orchestrator | 2025-05-29 00:58:23.228355 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-29 00:58:23.228359 | orchestrator | Thursday 29 May 2025 00:51:08 +0000 (0:00:00.345) 0:05:52.818 ********** 2025-05-29 00:58:23.228364 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.228368 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.228373 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.228377 | orchestrator | 2025-05-29 00:58:23.228382 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-29 00:58:23.228386 | orchestrator | Thursday 29 May 2025 00:51:09 +0000 (0:00:01.049) 0:05:53.868 ********** 2025-05-29 00:58:23.228391 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.228395 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.228400 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.228404 | orchestrator | 2025-05-29 00:58:23.228409 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-29 00:58:23.228417 | orchestrator | Thursday 29 May 2025 00:51:09 +0000 (0:00:00.321) 0:05:54.190 ********** 2025-05-29 00:58:23.228422 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:23.228426 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:23.228431 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:23.228435 | orchestrator | 2025-05-29 00:58:23.228440 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-29 00:58:23.228444 | orchestrator | Thursday 29 May 2025 00:51:09 +0000 (0:00:00.310) 0:05:54.500 ********** 2025-05-29 00:58:23.228449 | orchestrator | skipping: 
[testbed-node-0] 2025-05-29 00:58:23.228453 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.228458 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.228462 | orchestrator | 2025-05-29 00:58:23.228467 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-29 00:58:23.228474 | orchestrator | Thursday 29 May 2025 00:51:10 +0000 (0:00:00.301) 0:05:54.801 ********** 2025-05-29 00:58:23.228478 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.228483 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.228487 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.228492 | orchestrator | 2025-05-29 00:58:23.228496 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-29 00:58:23.228501 | orchestrator | Thursday 29 May 2025 00:51:10 +0000 (0:00:00.454) 0:05:55.255 ********** 2025-05-29 00:58:23.228505 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.228510 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.228514 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.228519 | orchestrator | 2025-05-29 00:58:23.228524 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-29 00:58:23.228528 | orchestrator | Thursday 29 May 2025 00:51:10 +0000 (0:00:00.287) 0:05:55.542 ********** 2025-05-29 00:58:23.228533 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.228537 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.228541 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.228546 | orchestrator | 2025-05-29 00:58:23.228550 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-29 00:58:23.228555 | orchestrator | Thursday 29 May 2025 00:51:11 +0000 (0:00:00.284) 0:05:55.827 ********** 2025-05-29 00:58:23.228559 | orchestrator | skipping: 
[testbed-node-0]
2025-05-29 00:58:23.228564 | orchestrator | skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact handler_mgr_status] ******************************
Thursday 29 May 2025 00:51:11 +0000 (0:00:00.295) 0:05:56.122 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-handler : set_fact handler_crash_status] ****************************
Thursday 29 May 2025 00:51:12 +0000 (0:00:00.602) 0:05:56.725 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
Thursday 29 May 2025 00:51:12 +0000 (0:00:00.294) 0:05:57.020 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
Thursday 29 May 2025 00:51:12 +0000 (0:00:00.334) 0:05:57.355 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : reset num_osds] ********************************************
Thursday 29 May 2025 00:51:13 +0000 (0:00:00.305) 0:05:57.661 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : count number of osds for lvm scenario] *********************
Thursday 29 May 2025 00:51:13 +0000 (0:00:00.471) 0:05:58.132 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : look up for ceph-volume rejected devices] ******************
Thursday 29 May 2025 00:51:13 +0000 (0:00:00.309) 0:05:58.441 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact rejected_devices] *********************************
Thursday 29 May 2025 00:51:14 +0000 (0:00:00.295) 0:05:58.737 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact _devices] *****************************************
Thursday 29 May 2025 00:51:14 +0000 (0:00:00.287) 0:05:59.024 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
Thursday 29 May 2025 00:51:14 +0000 (0:00:00.530) 0:05:59.555 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
Thursday 29 May 2025 00:51:15 +0000 (0:00:00.389) 0:05:59.945 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
Thursday 29 May 2025 00:51:15 +0000 (0:00:00.324) 0:06:00.270 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
Thursday 29 May 2025 00:51:15 +0000 (0:00:00.327) 0:06:00.597 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
Thursday 29 May 2025 00:51:16 +0000 (0:00:00.624) 0:06:01.222 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
Thursday 29 May 2025 00:51:16 +0000 (0:00:00.355) 0:06:01.577 **********
skipping: [testbed-node-0] => (item=)
skipping: [testbed-node-0] => (item=)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=)
skipping: [testbed-node-1] => (item=)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=)
skipping: [testbed-node-2] => (item=)
skipping: [testbed-node-2]

TASK [ceph-config : drop osd_memory_target from conf override] *****************
Thursday 29 May 2025 00:51:17 +0000 (0:00:00.370) 0:06:01.948 **********
skipping: [testbed-node-0] => (item=osd memory target)
skipping: [testbed-node-0] => (item=osd_memory_target)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=osd memory target)
skipping: [testbed-node-1] => (item=osd_memory_target)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=osd memory target)
skipping: [testbed-node-2] => (item=osd_memory_target)
skipping: [testbed-node-2]

TASK [ceph-config : set_fact _osd_memory_target] *******************************
Thursday 29 May 2025 00:51:17 +0000 (0:00:00.357) 0:06:02.305 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-config : create ceph conf directory] ********************************
Thursday 29 May 2025 00:51:18 +0000 (0:00:00.642) 0:06:02.947 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
Thursday 29 May 2025 00:51:18 +0000 (0:00:00.341) 0:06:03.288 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
Thursday 29 May 2025 00:51:18 +0000 (0:00:00.339) 0:06:03.628 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
Thursday 29 May 2025 00:51:19 +0000 (0:00:00.598) 0:06:04.227 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
Thursday 29 May 2025 00:51:19 +0000 (0:00:00.329) 0:06:04.556 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact _interface] ****************************************
Thursday 29 May 2025 00:51:20 +0000 (0:00:00.373) 0:06:04.930 **********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
Thursday 29 May 2025 00:51:20 +0000 (0:00:00.420) 0:06:05.351 **********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
Thursday 29 May 2025 00:51:21 +0000 (0:00:00.434) 0:06:05.785 **********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Thursday 29 May 2025 00:51:21 +0000 (0:00:00.418) 0:06:06.203 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
Thursday 29 May 2025 00:51:22 +0000 (0:00:00.643) 0:06:06.847 **********
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
Thursday 29 May 2025 00:51:22 +0000 (0:00:00.486) 0:06:07.333 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
Thursday 29 May 2025 00:51:23 +0000 (0:00:00.457) 0:06:07.791 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
Thursday 29 May 2025 00:51:23 +0000 (0:00:00.383) 0:06:08.175 **********
skipping: [testbed-node-0] => (item=0)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=0)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=0)
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact rgw_instances_host] ********************************
Thursday 29 May 2025 00:51:24 +0000 (0:00:01.145) 0:06:09.321 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-facts : set_fact rgw_instances_all] *********************************
Thursday 29 May 2025 00:51:25 +0000 (0:00:00.356) 0:06:09.678 **********
skipping: [testbed-node-0] => (item=testbed-node-3)
skipping: [testbed-node-0] => (item=testbed-node-4)
skipping: [testbed-node-0] => (item=testbed-node-5)
skipping: [testbed-node-0]
skipping: [testbed-node-1] => (item=testbed-node-3)
skipping: [testbed-node-1] => (item=testbed-node-4)
skipping: [testbed-node-1] => (item=testbed-node-5)
skipping: [testbed-node-1]
skipping: [testbed-node-2] => (item=testbed-node-3)
skipping: [testbed-node-2] => (item=testbed-node-4)
skipping: [testbed-node-2] => (item=testbed-node-5)
skipping: [testbed-node-2]

TASK [ceph-config : generate ceph.conf configuration file] *********************
Thursday 29 May 2025 00:51:25 +0000 (0:00:00.608) 0:06:10.286 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-rgw : create rgw keyrings] ******************************************
Thursday 29 May 2025 00:51:26 +0000 (0:00:00.898) 0:06:11.185 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-rgw : include_tasks multisite] **************************************
Thursday 29 May 2025 00:51:27 +0000 (0:00:00.562) 0:06:11.748 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
Thursday 29 May 2025 00:51:28 +0000 (0:00:00.893) 0:06:12.641 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : set_fact container_exec_cmd] **********************************
Thursday 29 May 2025 00:51:28 +0000 (0:00:00.556) 0:06:13.198 **********
ok: [testbed-node-0] => (item=testbed-node-0)
ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)

TASK [ceph-mgr : include common.yml] *******************************************
Thursday 29 May 2025 00:51:29 +0000 (0:00:01.279) 0:06:14.478 **********
included: /ansible/roles/ceph-mgr/tasks/common.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : create mgr directory] *****************************************
Thursday 29 May 2025 00:51:30 +0000 (0:00:00.628) 0:06:15.106 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : fetch ceph mgr keyring] ***************************************
Thursday 29 May 2025 00:51:31 +0000 (0:00:00.755) 0:06:15.862 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : create ceph mgr keyring(s) on a mon node] *********************
Thursday 29 May 2025 00:51:31 +0000 (0:00:00.544) 0:06:16.406 **********
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-0 -> {{ groups[mon_group_name][0] }}]

TASK [ceph-mgr : set_fact _mgr_keys] *******************************************
Thursday 29 May 2025 00:51:39 +0000 (0:00:07.752) 0:06:24.159 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : get keys from monitors] ***************************************
Thursday 29 May 2025 00:51:39 +0000 (0:00:00.379) 0:06:24.539 **********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
ok: [testbed-node-0] => (item=None)
ok: [testbed-node-1 -> testbed-node-0(192.168.16.10)] => (item=None)
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=None)

TASK [ceph-mgr : copy ceph key(s) if needed] ***********************************
Thursday 29 May 2025 00:51:41 +0000 (0:00:02.092) 0:06:26.632 **********
skipping: [testbed-node-0] => (item=None)
skipping: [testbed-node-1] => (item=None)
skipping: [testbed-node-2] => (item=None)
changed: [testbed-node-0] => (item=None)
changed: [testbed-node-1] => (item=None)
changed: [testbed-node-2] => (item=None)

TASK [ceph-mgr : set mgr key permissions] **************************************
Thursday 29 May 2025 00:51:43 +0000 (0:00:01.260) 0:06:27.892 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

TASK [ceph-mgr : append dashboard modules to ceph_mgr_modules] *****************
Thursday 29 May 2025 00:51:43 +0000 (0:00:00.681) 0:06:28.574 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : include pre_requisite.yml] ************************************
Thursday 29 May 2025 00:51:44 +0000 (0:00:00.565) 0:06:29.139 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : include start_mgr.yml] ****************************************
Thursday 29 May 2025 00:51:44 +0000 (0:00:00.321) 0:06:29.461 **********
included: /ansible/roles/ceph-mgr/tasks/start_mgr.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : ensure systemd service override directory exists] *************
Thursday 29 May 2025 00:51:45 +0000 (0:00:00.577) 0:06:30.039 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : add ceph-mgr systemd service overrides] ***********************
Thursday 29 May 2025 00:51:46 +0000 (0:00:00.629) 0:06:30.669 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
skipping: [testbed-node-2]

TASK [ceph-mgr : include_tasks systemd.yml] ************************************
Thursday 29 May 2025 00:51:46 +0000 (0:00:00.357) 0:06:31.027 **********
included: /ansible/roles/ceph-mgr/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2

TASK [ceph-mgr : generate systemd unit file] ***********************************
Thursday 29 May 2025 00:51:46 +0000 (0:00:00.588) 0:06:31.616 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : generate systemd ceph-mgr target file] ************************
Thursday 29 May 2025 00:51:48 +0000 (0:00:01.486) 0:06:33.102 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

TASK [ceph-mgr : enable ceph-mgr.target] ***************************************
Thursday 29 May 2025 00:51:49 +0000 (0:00:01.147) 0:06:34.250 **********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mgr : systemd start mgr] ********************************************
Thursday 29 May 2025 00:51:51 +0000 (0:00:01.708) 0:06:35.958 **********
changed: [testbed-node-1]
changed: [testbed-node-0]
changed: [testbed-node-2]

TASK [ceph-mgr : include mgr_modules.yml] **************************************
Thursday 29 May 2025 00:51:53 +0000 (0:00:02.262) 0:06:38.221 **********
skipping: [testbed-node-0]
skipping: [testbed-node-1]
included: /ansible/roles/ceph-mgr/tasks/mgr_modules.yml for testbed-node-2

TASK [ceph-mgr : wait for all mgr to be up] ************************************
Thursday 29 May 2025 00:51:54 +0000 (0:00:00.728) 0:06:38.949 **********
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (30 retries left).
FAILED - RETRYING: [testbed-node-2 -> testbed-node-0]: wait for all mgr to be up (29 retries left).
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : get enabled modules from ceph-mgr] ****************************
Thursday 29 May 2025 00:52:07 +0000 (0:00:13.348) 0:06:52.297 **********
ok: [testbed-node-2 -> testbed-node-0(192.168.16.10)]

TASK [ceph-mgr : set _ceph_mgr_modules fact (convert _ceph_mgr_modules.stdout to a dict)] ***
Thursday 29 May 2025 00:52:09 +0000 (0:00:01.783) 0:06:54.081 **********
ok: [testbed-node-2]

TASK [ceph-mgr : set _disabled_ceph_mgr_modules fact] **************************
Thursday 29 May 2025 00:52:09 +0000 (0:00:00.442) 0:06:54.523 **********
ok: [testbed-node-2]

TASK [ceph-mgr : disable ceph mgr enabled modules] *****************************
Thursday 29 May 2025 00:52:10 +0000 (0:00:00.298) 0:06:54.822 **********
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=iostat)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=nfs)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=restful)

TASK [ceph-mgr : add modules to ceph-mgr] **************************************
Thursday 29 May 2025 00:52:16 +0000 (0:00:06.525) 0:07:01.347 **********
skipping: [testbed-node-2] => (item=balancer)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=dashboard)
changed: [testbed-node-2 -> testbed-node-0(192.168.16.10)] => (item=prometheus)
skipping: [testbed-node-2] => (item=status)

RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
Thursday 29 May 2025 00:52:21 +0000 (0:00:05.081) 0:07:06.429 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : mgrs handler] **********************************
Thursday 29 May 2025 00:52:22 +0000 (0:00:00.690) 0:07:07.119 **********
included: /ansible/roles/ceph-handler/tasks/handler_mgrs.yml for testbed-node-0, testbed-node-1, testbed-node-2

RUNNING HANDLER [ceph-handler : set _mgr_handler_called before restart] ********
Thursday 29 May 2025 00:52:23 +0000 (0:00:00.828) 0:07:07.948 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : copy mgr restart script] ***********************
Thursday 29 May 2025 00:52:23 +0000 (0:00:00.349) 0:07:08.297 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

RUNNING HANDLER [ceph-handler : restart ceph mgr daemon(s)] ********************
Thursday 29 May 2025 00:52:25 +0000 (0:00:01.549) 0:07:09.846 **********
skipping: [testbed-node-0] => (item=testbed-node-0)
skipping: [testbed-node-0] => (item=testbed-node-1)
skipping: [testbed-node-0] => (item=testbed-node-2)
skipping: [testbed-node-0]

RUNNING HANDLER [ceph-handler : set _mgr_handler_called after restart] *********
Thursday 29 May 2025 00:52:25 +0000 (0:00:00.675) 0:07:10.522 **********
ok: [testbed-node-0]
ok: [testbed-node-1]
ok: [testbed-node-2]

RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
Thursday 29 May 2025 00:52:26 +0000 (0:00:00.385) 0:07:10.908 **********
changed: [testbed-node-0]
changed: [testbed-node-1]
changed: [testbed-node-2]

PLAY [Apply role ceph-osd] *****************************************************

TASK [ceph-handler : include check_running_containers.yml] *********************
Thursday 29 May 2025 00:52:28 +0000 (0:00:02.161) 0:07:13.069 **********
included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5

TASK [ceph-handler : check for a mon container] ********************************
Thursday 29 May 2025 00:52:29 +0000 (0:00:00.807) 0:07:13.876 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for an osd container] *******************************
Thursday 29 May 2025 00:52:29 +0000 (0:00:00.355) 0:07:14.232 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : check for a mds container] ********************************
Thursday 29 May 2025 00:52:30 +0000 (0:00:00.684) 0:07:14.916 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : check for a rgw container] ********************************
Thursday 29 May 2025 00:52:31 +0000 (0:00:00.859) 0:07:15.776 **********
ok: [testbed-node-3]
ok: [testbed-node-4]
ok: [testbed-node-5]

TASK [ceph-handler : check for a mgr container] ********************************
Thursday 29 May 2025 00:52:31 +0000 (0:00:00.663) 0:07:16.439 **********
skipping: [testbed-node-3]
skipping: [testbed-node-4]
skipping: [testbed-node-5]

TASK [ceph-handler : check for a rbd mirror container] *************************
00:58:23.231094 | orchestrator | Thursday 29 May 2025 00:52:32 +0000 (0:00:00.288) 0:07:16.727 ********** 2025-05-29 00:58:23.231099 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231103 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231108 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231112 | orchestrator | 2025-05-29 00:58:23.231117 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-29 00:58:23.231122 | orchestrator | Thursday 29 May 2025 00:52:32 +0000 (0:00:00.284) 0:07:17.012 ********** 2025-05-29 00:58:23.231129 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231134 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231138 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231143 | orchestrator | 2025-05-29 00:58:23.231147 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-29 00:58:23.231152 | orchestrator | Thursday 29 May 2025 00:52:33 +0000 (0:00:00.635) 0:07:17.647 ********** 2025-05-29 00:58:23.231156 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231161 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231165 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231170 | orchestrator | 2025-05-29 00:58:23.231175 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-29 00:58:23.231179 | orchestrator | Thursday 29 May 2025 00:52:33 +0000 (0:00:00.353) 0:07:18.001 ********** 2025-05-29 00:58:23.231184 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231188 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231193 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231197 | orchestrator | 2025-05-29 00:58:23.231202 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-29 
00:58:23.231206 | orchestrator | Thursday 29 May 2025 00:52:33 +0000 (0:00:00.341) 0:07:18.342 ********** 2025-05-29 00:58:23.231244 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231250 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231255 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231259 | orchestrator | 2025-05-29 00:58:23.231264 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-29 00:58:23.231268 | orchestrator | Thursday 29 May 2025 00:52:34 +0000 (0:00:00.311) 0:07:18.654 ********** 2025-05-29 00:58:23.231273 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.231278 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.231282 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.231287 | orchestrator | 2025-05-29 00:58:23.231291 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-29 00:58:23.231296 | orchestrator | Thursday 29 May 2025 00:52:35 +0000 (0:00:01.075) 0:07:19.730 ********** 2025-05-29 00:58:23.231301 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231305 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231310 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231314 | orchestrator | 2025-05-29 00:58:23.231319 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ****************************** 2025-05-29 00:58:23.231323 | orchestrator | Thursday 29 May 2025 00:52:35 +0000 (0:00:00.365) 0:07:20.095 ********** 2025-05-29 00:58:23.231327 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231346 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231351 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231355 | orchestrator | 2025-05-29 00:58:23.231359 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ****************************** 2025-05-29 00:58:23.231363 | 
orchestrator | Thursday 29 May 2025 00:52:35 +0000 (0:00:00.336) 0:07:20.431 ********** 2025-05-29 00:58:23.231367 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.231371 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.231376 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.231380 | orchestrator | 2025-05-29 00:58:23.231384 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ****************************** 2025-05-29 00:58:23.231388 | orchestrator | Thursday 29 May 2025 00:52:36 +0000 (0:00:00.638) 0:07:21.070 ********** 2025-05-29 00:58:23.231392 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.231396 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.231400 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.231404 | orchestrator | 2025-05-29 00:58:23.231408 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ****************************** 2025-05-29 00:58:23.231412 | orchestrator | Thursday 29 May 2025 00:52:36 +0000 (0:00:00.347) 0:07:21.418 ********** 2025-05-29 00:58:23.231416 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.231424 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.231428 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.231432 | orchestrator | 2025-05-29 00:58:23.231437 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ****************************** 2025-05-29 00:58:23.231441 | orchestrator | Thursday 29 May 2025 00:52:37 +0000 (0:00:00.348) 0:07:21.766 ********** 2025-05-29 00:58:23.231445 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231449 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231453 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231457 | orchestrator | 2025-05-29 00:58:23.231461 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ****************************** 2025-05-29 00:58:23.231465 | orchestrator | Thursday 29 May 2025 00:52:37 +0000 
(0:00:00.317) 0:07:22.083 ********** 2025-05-29 00:58:23.231470 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231474 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231478 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231482 | orchestrator | 2025-05-29 00:58:23.231486 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ****************************** 2025-05-29 00:58:23.231490 | orchestrator | Thursday 29 May 2025 00:52:37 +0000 (0:00:00.306) 0:07:22.389 ********** 2025-05-29 00:58:23.231497 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231502 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231506 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231510 | orchestrator | 2025-05-29 00:58:23.231514 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] **************************** 2025-05-29 00:58:23.231518 | orchestrator | Thursday 29 May 2025 00:52:38 +0000 (0:00:00.623) 0:07:23.013 ********** 2025-05-29 00:58:23.231522 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.231526 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.231530 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.231535 | orchestrator | 2025-05-29 00:58:23.231539 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] ********************** 2025-05-29 00:58:23.231543 | orchestrator | Thursday 29 May 2025 00:52:38 +0000 (0:00:00.353) 0:07:23.367 ********** 2025-05-29 00:58:23.231547 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231551 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231555 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231559 | orchestrator | 2025-05-29 00:58:23.231563 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************ 2025-05-29 00:58:23.231568 | orchestrator | Thursday 29 May 2025 00:52:39 +0000 (0:00:00.330) 
0:07:23.697 ********** 2025-05-29 00:58:23.231572 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231576 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231580 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231584 | orchestrator | 2025-05-29 00:58:23.231588 | orchestrator | TASK [ceph-config : reset num_osds] ******************************************** 2025-05-29 00:58:23.231592 | orchestrator | Thursday 29 May 2025 00:52:39 +0000 (0:00:00.342) 0:07:24.039 ********** 2025-05-29 00:58:23.231596 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231600 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231605 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231609 | orchestrator | 2025-05-29 00:58:23.231613 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] ********************* 2025-05-29 00:58:23.231617 | orchestrator | Thursday 29 May 2025 00:52:40 +0000 (0:00:00.654) 0:07:24.694 ********** 2025-05-29 00:58:23.231621 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231625 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231629 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231633 | orchestrator | 2025-05-29 00:58:23.231637 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ****************** 2025-05-29 00:58:23.231642 | orchestrator | Thursday 29 May 2025 00:52:40 +0000 (0:00:00.347) 0:07:25.042 ********** 2025-05-29 00:58:23.231646 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231650 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231657 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231662 | orchestrator | 2025-05-29 00:58:23.231666 | orchestrator | TASK [ceph-config : set_fact rejected_devices] ********************************* 2025-05-29 00:58:23.231670 | orchestrator | Thursday 29 May 2025 00:52:40 +0000 (0:00:00.324) 
0:07:25.366 ********** 2025-05-29 00:58:23.231674 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231678 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231682 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231686 | orchestrator | 2025-05-29 00:58:23.231690 | orchestrator | TASK [ceph-config : set_fact _devices] ***************************************** 2025-05-29 00:58:23.231694 | orchestrator | Thursday 29 May 2025 00:52:41 +0000 (0:00:00.313) 0:07:25.680 ********** 2025-05-29 00:58:23.231699 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231703 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231707 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231711 | orchestrator | 2025-05-29 00:58:23.231715 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] *** 2025-05-29 00:58:23.231719 | orchestrator | Thursday 29 May 2025 00:52:41 +0000 (0:00:00.691) 0:07:26.372 ********** 2025-05-29 00:58:23.231724 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231739 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231744 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231748 | orchestrator | 2025-05-29 00:58:23.231752 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] *** 2025-05-29 00:58:23.231757 | orchestrator | Thursday 29 May 2025 00:52:42 +0000 (0:00:00.334) 0:07:26.707 ********** 2025-05-29 00:58:23.231761 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231765 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231769 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231773 | orchestrator | 2025-05-29 00:58:23.231777 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] *** 2025-05-29 
00:58:23.231781 | orchestrator | Thursday 29 May 2025 00:52:42 +0000 (0:00:00.343) 0:07:27.051 ********** 2025-05-29 00:58:23.231785 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231789 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231793 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231797 | orchestrator | 2025-05-29 00:58:23.231802 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] *** 2025-05-29 00:58:23.231806 | orchestrator | Thursday 29 May 2025 00:52:42 +0000 (0:00:00.315) 0:07:27.367 ********** 2025-05-29 00:58:23.231810 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231814 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231818 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231822 | orchestrator | 2025-05-29 00:58:23.231826 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] ********************* 2025-05-29 00:58:23.231830 | orchestrator | Thursday 29 May 2025 00:52:43 +0000 (0:00:00.695) 0:07:28.062 ********** 2025-05-29 00:58:23.231834 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231838 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231842 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231846 | orchestrator | 2025-05-29 00:58:23.231851 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] *** 2025-05-29 00:58:23.231855 | orchestrator | Thursday 29 May 2025 00:52:43 +0000 (0:00:00.344) 0:07:28.406 ********** 2025-05-29 00:58:23.231859 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-29 00:58:23.231863 | orchestrator | skipping: [testbed-node-3] => (item=)  2025-05-29 00:58:23.231869 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231874 | orchestrator | skipping: [testbed-node-4] => (item=)  2025-05-29 00:58:23.231878 | orchestrator | 
skipping: [testbed-node-4] => (item=)  2025-05-29 00:58:23.231882 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231886 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-29 00:58:23.231893 | orchestrator | skipping: [testbed-node-5] => (item=)  2025-05-29 00:58:23.231897 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231901 | orchestrator | 2025-05-29 00:58:23.231906 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] ***************** 2025-05-29 00:58:23.231910 | orchestrator | Thursday 29 May 2025 00:52:44 +0000 (0:00:00.408) 0:07:28.815 ********** 2025-05-29 00:58:23.231914 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)  2025-05-29 00:58:23.231918 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)  2025-05-29 00:58:23.231922 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231926 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)  2025-05-29 00:58:23.231930 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)  2025-05-29 00:58:23.231934 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231938 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)  2025-05-29 00:58:23.231943 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)  2025-05-29 00:58:23.231947 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231951 | orchestrator | 2025-05-29 00:58:23.231955 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] ******************************* 2025-05-29 00:58:23.231959 | orchestrator | Thursday 29 May 2025 00:52:44 +0000 (0:00:00.337) 0:07:29.152 ********** 2025-05-29 00:58:23.231963 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.231967 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.231971 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.231976 | orchestrator | 2025-05-29 
00:58:23.231989 | orchestrator | TASK [ceph-config : create ceph conf directory] ******************************** 2025-05-29 00:58:23.231994 | orchestrator | Thursday 29 May 2025 00:52:45 +0000 (0:00:00.755) 0:07:29.907 ********** 2025-05-29 00:58:23.231998 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232002 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.232006 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.232010 | orchestrator | 2025-05-29 00:58:23.232015 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-29 00:58:23.232019 | orchestrator | Thursday 29 May 2025 00:52:45 +0000 (0:00:00.387) 0:07:30.295 ********** 2025-05-29 00:58:23.232023 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232027 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.232031 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.232035 | orchestrator | 2025-05-29 00:58:23.232039 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-29 00:58:23.232043 | orchestrator | Thursday 29 May 2025 00:52:45 +0000 (0:00:00.323) 0:07:30.618 ********** 2025-05-29 00:58:23.232048 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232052 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.232056 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.232060 | orchestrator | 2025-05-29 00:58:23.232064 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-29 00:58:23.232068 | orchestrator | Thursday 29 May 2025 00:52:46 +0000 (0:00:00.333) 0:07:30.952 ********** 2025-05-29 00:58:23.232072 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232076 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.232080 | orchestrator | skipping: [testbed-node-5] 
2025-05-29 00:58:23.232085 | orchestrator | 2025-05-29 00:58:23.232089 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-29 00:58:23.232106 | orchestrator | Thursday 29 May 2025 00:52:47 +0000 (0:00:00.692) 0:07:31.644 ********** 2025-05-29 00:58:23.232111 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232115 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.232119 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.232123 | orchestrator | 2025-05-29 00:58:23.232127 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-29 00:58:23.232135 | orchestrator | Thursday 29 May 2025 00:52:47 +0000 (0:00:00.352) 0:07:31.996 ********** 2025-05-29 00:58:23.232139 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.232143 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.232147 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.232151 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232155 | orchestrator | 2025-05-29 00:58:23.232159 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-29 00:58:23.232163 | orchestrator | Thursday 29 May 2025 00:52:47 +0000 (0:00:00.430) 0:07:32.427 ********** 2025-05-29 00:58:23.232168 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.232172 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.232176 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.232180 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232184 | orchestrator | 2025-05-29 00:58:23.232188 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-29 00:58:23.232192 | 
orchestrator | Thursday 29 May 2025 00:52:48 +0000 (0:00:00.421) 0:07:32.848 ********** 2025-05-29 00:58:23.232197 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.232201 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.232205 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.232209 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232228 | orchestrator | 2025-05-29 00:58:23.232233 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-29 00:58:23.232237 | orchestrator | Thursday 29 May 2025 00:52:48 +0000 (0:00:00.408) 0:07:33.257 ********** 2025-05-29 00:58:23.232241 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232245 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.232254 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.232258 | orchestrator | 2025-05-29 00:58:23.232262 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-29 00:58:23.232267 | orchestrator | Thursday 29 May 2025 00:52:49 +0000 (0:00:00.675) 0:07:33.933 ********** 2025-05-29 00:58:23.232271 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-29 00:58:23.232275 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232279 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-29 00:58:23.232283 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.232287 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-29 00:58:23.232291 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.232295 | orchestrator | 2025-05-29 00:58:23.232300 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-29 00:58:23.232306 | orchestrator | Thursday 29 May 2025 00:52:49 +0000 (0:00:00.523) 0:07:34.457 ********** 2025-05-29 
00:58:23.232312 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232319 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.232325 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.232331 | orchestrator | 2025-05-29 00:58:23.232338 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-29 00:58:23.232346 | orchestrator | Thursday 29 May 2025 00:52:50 +0000 (0:00:00.400) 0:07:34.857 ********** 2025-05-29 00:58:23.232350 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232355 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.232359 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.232363 | orchestrator | 2025-05-29 00:58:23.232367 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-29 00:58:23.232371 | orchestrator | Thursday 29 May 2025 00:52:50 +0000 (0:00:00.330) 0:07:35.188 ********** 2025-05-29 00:58:23.232375 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-29 00:58:23.232380 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232388 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-29 00:58:23.232392 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.232396 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-29 00:58:23.232400 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.232404 | orchestrator | 2025-05-29 00:58:23.232408 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-29 00:58:23.232413 | orchestrator | Thursday 29 May 2025 00:52:51 +0000 (0:00:00.816) 0:07:36.005 ********** 2025-05-29 00:58:23.232417 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.232421 | orchestrator | skipping: [testbed-node-3] 2025-05-29 
00:58:23.232425 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.232430 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.232434 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.232438 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.232442 | orchestrator | 2025-05-29 00:58:23.232446 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-29 00:58:23.232451 | orchestrator | Thursday 29 May 2025 00:52:51 +0000 (0:00:00.339) 0:07:36.344 ********** 2025-05-29 00:58:23.232455 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.232459 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.232463 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.232483 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-29 00:58:23.232488 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-29 00:58:23.232492 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-29 00:58:23.232496 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.232500 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.232504 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-29 00:58:23.232508 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-29 00:58:23.232512 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-29 00:58:23.232516 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.232520 | orchestrator | 2025-05-29 00:58:23.232524 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] 
*********************
2025-05-29 00:58:23.232528 | orchestrator | Thursday 29 May 2025 00:52:52 +0000 (0:00:00.572) 0:07:36.917 **********
2025-05-29 00:58:23.232533 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.232537 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.232541 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.232545 | orchestrator |
2025-05-29 00:58:23.232549 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-29 00:58:23.232553 | orchestrator | Thursday 29 May 2025 00:52:52 +0000 (0:00:00.639) 0:07:37.556 **********
2025-05-29 00:58:23.232557 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-29 00:58:23.232561 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.232565 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-29 00:58:23.232569 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.232573 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-29 00:58:23.232577 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.232581 | orchestrator |
2025-05-29 00:58:23.232585 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-29 00:58:23.232590 | orchestrator | Thursday 29 May 2025 00:52:53 +0000 (0:00:00.541) 0:07:38.098 **********
2025-05-29 00:58:23.232594 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.232598 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.232605 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.232610 | orchestrator |
2025-05-29 00:58:23.232614 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-29 00:58:23.232621 | orchestrator | Thursday 29 May 2025 00:52:54 +0000 (0:00:00.879) 0:07:38.977 **********
2025-05-29 00:58:23.232625 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.232629 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.232633 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.232637 | orchestrator |
2025-05-29 00:58:23.232641 | orchestrator | TASK [ceph-osd : set_fact add_osd] *********************************************
2025-05-29 00:58:23.232646 | orchestrator | Thursday 29 May 2025 00:52:54 +0000 (0:00:00.532) 0:07:39.510 **********
2025-05-29 00:58:23.232650 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.232654 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.232658 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.232662 | orchestrator |
2025-05-29 00:58:23.232666 | orchestrator | TASK [ceph-osd : set_fact container_exec_cmd] **********************************
2025-05-29 00:58:23.232670 | orchestrator | Thursday 29 May 2025 00:52:55 +0000 (0:00:00.635) 0:07:40.146 **********
2025-05-29 00:58:23.232674 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-29 00:58:23.232678 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-29 00:58:23.232682 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-29 00:58:23.232686 | orchestrator |
2025-05-29 00:58:23.232691 | orchestrator | TASK [ceph-osd : include_tasks system_tuning.yml] ******************************
2025-05-29 00:58:23.232695 | orchestrator | Thursday 29 May 2025 00:52:56 +0000 (0:00:00.710) 0:07:40.857 **********
2025-05-29 00:58:23.232699 | orchestrator | included: /ansible/roles/ceph-osd/tasks/system_tuning.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.232703 | orchestrator |
2025-05-29 00:58:23.232707 | orchestrator | TASK [ceph-osd : disable osd directory parsing by updatedb] ********************
2025-05-29 00:58:23.232711 | orchestrator | Thursday 29 May 2025 00:52:56 +0000 (0:00:00.564) 0:07:41.421 **********
2025-05-29 00:58:23.232715 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.232719 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.232723 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.232728 | orchestrator |
2025-05-29 00:58:23.232732 | orchestrator | TASK [ceph-osd : disable osd directory path in updatedb.conf] ******************
2025-05-29 00:58:23.232736 | orchestrator | Thursday 29 May 2025 00:52:57 +0000 (0:00:00.327) 0:07:41.748 **********
2025-05-29 00:58:23.232740 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.232744 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.232748 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.232752 | orchestrator |
2025-05-29 00:58:23.232756 | orchestrator | TASK [ceph-osd : create tmpfiles.d directory] **********************************
2025-05-29 00:58:23.232760 | orchestrator | Thursday 29 May 2025 00:52:57 +0000 (0:00:00.636) 0:07:42.384 **********
2025-05-29 00:58:23.232764 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.232768 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.232772 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.232776 | orchestrator |
2025-05-29 00:58:23.232780 | orchestrator | TASK [ceph-osd : disable transparent hugepage] *********************************
2025-05-29 00:58:23.232785 | orchestrator | Thursday 29 May 2025 00:52:58 +0000 (0:00:00.337) 0:07:42.722 **********
2025-05-29 00:58:23.232789 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.232793 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.232797 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.232801 | orchestrator |
2025-05-29 00:58:23.232805 | orchestrator | TASK [ceph-osd : get default vm.min_free_kbytes] *******************************
2025-05-29 00:58:23.232809 | orchestrator | Thursday 29 May 2025 00:52:58 +0000 (0:00:00.406) 0:07:43.129 **********
2025-05-29 00:58:23.232813 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.232817 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.232825 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.232829 | orchestrator |
2025-05-29 00:58:23.232845 | orchestrator | TASK [ceph-osd : set_fact vm_min_free_kbytes] **********************************
2025-05-29 00:58:23.232850 | orchestrator | Thursday 29 May 2025 00:52:59 +0000 (0:00:00.735) 0:07:43.865 **********
2025-05-29 00:58:23.232854 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.232858 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.232862 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.232866 | orchestrator |
2025-05-29 00:58:23.232871 | orchestrator | TASK [ceph-osd : apply operating system tuning] ********************************
2025-05-29 00:58:23.232875 | orchestrator | Thursday 29 May 2025 00:52:59 +0000 (0:00:00.680) 0:07:44.545 **********
2025-05-29 00:58:23.232879 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-05-29 00:58:23.232883 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-05-29 00:58:23.232887 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.aio-max-nr', 'value': '1048576', 'enable': True})
2025-05-29 00:58:23.232891 | orchestrator | changed: [testbed-node-5] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-05-29 00:58:23.232895 | orchestrator | changed: [testbed-node-3] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-05-29 00:58:23.232899 | orchestrator | changed: [testbed-node-4] => (item={'name': 'fs.file-max', 'value': 26234859})
2025-05-29 00:58:23.232904 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-29 00:58:23.232908 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-29 00:58:23.232912 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.zone_reclaim_mode', 'value': 0})
2025-05-29 00:58:23.232916 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-29 00:58:23.232920 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-29 00:58:23.232924 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.swappiness', 'value': 10})
2025-05-29 00:58:23.232928 | orchestrator | changed: [testbed-node-5] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-29 00:58:23.232932 | orchestrator | changed: [testbed-node-4] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-29 00:58:23.232937 | orchestrator | changed: [testbed-node-3] => (item={'name': 'vm.min_free_kbytes', 'value': '67584'})
2025-05-29 00:58:23.232941 | orchestrator |
2025-05-29 00:58:23.232945 | orchestrator | TASK [ceph-osd : install dependencies] *****************************************
2025-05-29 00:58:23.232949 | orchestrator | Thursday 29 May 2025 00:53:02 +0000 (0:00:02.355) 0:07:46.901 **********
2025-05-29 00:58:23.232954 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.232958 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.232962 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.232966 | orchestrator |
2025-05-29 00:58:23.232970 | orchestrator | TASK [ceph-osd : include_tasks common.yml] *************************************
2025-05-29 00:58:23.232974 | orchestrator | Thursday 29 May 2025 00:53:02 +0000 (0:00:00.328) 0:07:47.229 **********
2025-05-29 00:58:23.232978 | orchestrator | included: /ansible/roles/ceph-osd/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.232982 | orchestrator |
2025-05-29 00:58:23.232986 | orchestrator | TASK [ceph-osd : create bootstrap-osd and osd directories] *********************
2025-05-29 00:58:23.232990 | orchestrator | Thursday 29 May 2025 00:53:03 +0000 (0:00:00.797) 0:07:48.027 **********
2025-05-29 00:58:23.232995 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-29 00:58:23.232999 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-29 00:58:23.233003 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-osd/)
2025-05-29 00:58:23.233007 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/osd/)
2025-05-29 00:58:23.233015 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/osd/)
2025-05-29 00:58:23.233019 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/osd/)
2025-05-29 00:58:23.233023 | orchestrator |
2025-05-29 00:58:23.233027 | orchestrator | TASK [ceph-osd : get keys from monitors] ***************************************
2025-05-29 00:58:23.233031 | orchestrator | Thursday 29 May 2025 00:53:04 +0000 (0:00:00.994) 0:07:49.022 **********
2025-05-29 00:58:23.233035 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 00:58:23.233039 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-29 00:58:23.233043 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-29 00:58:23.233047 | orchestrator |
2025-05-29 00:58:23.233051 | orchestrator | TASK [ceph-osd : copy ceph key(s) if needed] ***********************************
2025-05-29 00:58:23.233056 | orchestrator | Thursday 29 May 2025 00:53:06 +0000 (0:00:01.812) 0:07:50.835 **********
2025-05-29 00:58:23.233060 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-29 00:58:23.233064 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-29 00:58:23.233068 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.233072 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-29 00:58:23.233076 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-29 00:58:23.233080 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.233084 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-29 00:58:23.233088 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-29 00:58:23.233092 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.233097 | orchestrator |
2025-05-29 00:58:23.233101 | orchestrator | TASK [ceph-osd : set noup flag] ************************************************
2025-05-29 00:58:23.233105 | orchestrator | Thursday 29 May 2025 00:53:07 +0000 (0:00:01.470) 0:07:52.305 **********
2025-05-29 00:58:23.233121 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)]
2025-05-29 00:58:23.233126 | orchestrator |
2025-05-29 00:58:23.233130 | orchestrator | TASK [ceph-osd : include container_options_facts.yml] **************************
2025-05-29 00:58:23.233134 | orchestrator | Thursday 29 May 2025 00:53:09 +0000 (0:00:01.913) 0:07:54.219 **********
2025-05-29 00:58:23.233138 | orchestrator | included: /ansible/roles/ceph-osd/tasks/container_options_facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.233142 | orchestrator |
2025-05-29 00:58:23.233146 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=0'] ***
2025-05-29 00:58:23.233151 | orchestrator | Thursday 29 May 2025 00:53:10 +0000 (0:00:00.570) 0:07:54.789 **********
2025-05-29 00:58:23.233155 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.233159 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.233163 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.233167 | orchestrator |
2025-05-29 00:58:23.233171 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=0 -e osd_filestore=1 -e osd_dmcrypt=1'] ***
2025-05-29 00:58:23.233175 | orchestrator | Thursday 29 May 2025 00:53:10 +0000 (0:00:00.575) 0:07:55.365 **********
2025-05-29 00:58:23.233179 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.233183 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.233187 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.233191 | orchestrator |
2025-05-29 00:58:23.233195 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=0'] ***
2025-05-29 00:58:23.233200 | orchestrator | Thursday 29 May 2025 00:53:11 +0000 (0:00:00.325) 0:07:55.690 **********
2025-05-29 00:58:23.233204 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.233208 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.233388 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.233394 | orchestrator |
2025-05-29 00:58:23.233416 | orchestrator | TASK [ceph-osd : set_fact container_env_args '-e osd_bluestore=1 -e osd_filestore=0 -e osd_dmcrypt=1'] ***
2025-05-29 00:58:23.233421 | orchestrator | Thursday 29 May 2025 00:53:11 +0000 (0:00:00.324) 0:07:56.014 **********
2025-05-29 00:58:23.233429 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.233433 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.233437 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.233442 | orchestrator |
2025-05-29 00:58:23.233448 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm.yml] ******************************
2025-05-29 00:58:23.233452 | orchestrator | Thursday 29 May 2025 00:53:11 +0000 (0:00:00.363) 0:07:56.378 **********
2025-05-29 00:58:23.233456 | orchestrator | included: /ansible/roles/ceph-osd/tasks/scenarios/lvm.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.233460 | orchestrator |
2025-05-29 00:58:23.233465 | orchestrator | TASK [ceph-osd : use ceph-volume to create bluestore osds] *********************
2025-05-29 00:58:23.233469 | orchestrator | Thursday 29 May 2025 00:53:12 +0000 (0:00:00.906) 0:07:57.284 **********
2025-05-29 00:58:23.233473 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-a1850b6b-a1b4-57b7-9f5e-deb9029890df', 'data_vg': 'ceph-a1850b6b-a1b4-57b7-9f5e-deb9029890df'})
2025-05-29 00:58:23.233478 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-b02a0e5a-ac94-54a1-88a1-38ba26e145f6', 'data_vg': 'ceph-b02a0e5a-ac94-54a1-88a1-38ba26e145f6'})
2025-05-29 00:58:23.233482 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-2961dba5-5d3e-5262-aab3-a8717ef28b96', 'data_vg': 'ceph-2961dba5-5d3e-5262-aab3-a8717ef28b96'})
2025-05-29 00:58:23.233486 | orchestrator | changed: [testbed-node-5] => (item={'data': 'osd-block-05ae814f-03ae-5777-aef4-91f0b0270e90', 'data_vg': 'ceph-05ae814f-03ae-5777-aef4-91f0b0270e90'})
2025-05-29 00:58:23.233490 | orchestrator | changed: [testbed-node-3] => (item={'data': 'osd-block-81bd5020-0460-5411-80bb-35101e63cce8', 'data_vg': 'ceph-81bd5020-0460-5411-80bb-35101e63cce8'})
2025-05-29 00:58:23.233494 | orchestrator | changed: [testbed-node-4] => (item={'data': 'osd-block-10c8172d-d6a1-5b27-956e-8c5bc818fcb1', 'data_vg': 'ceph-10c8172d-d6a1-5b27-956e-8c5bc818fcb1'})
2025-05-29 00:58:23.233498 | orchestrator |
2025-05-29 00:58:23.233502 | orchestrator | TASK [ceph-osd : include_tasks scenarios/lvm-batch.yml] ************************
2025-05-29 00:58:23.233507 | orchestrator | Thursday 29 May 2025 00:53:56 +0000 (0:00:44.318) 0:08:41.602 **********
2025-05-29 00:58:23.233511 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.233515 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.233519 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.233523 | orchestrator |
2025-05-29 00:58:23.233527 | orchestrator | TASK [ceph-osd : include_tasks start_osds.yml] *********************************
2025-05-29 00:58:23.233531 | orchestrator | Thursday 29 May 2025 00:53:57 +0000 (0:00:00.497) 0:08:42.099 **********
2025-05-29 00:58:23.233535 | orchestrator | included: /ansible/roles/ceph-osd/tasks/start_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.233539 | orchestrator |
2025-05-29 00:58:23.233543 | orchestrator | TASK [ceph-osd : get osd ids] **************************************************
2025-05-29 00:58:23.233547 | orchestrator | Thursday 29 May 2025 00:53:58 +0000 (0:00:00.570) 0:08:42.670 **********
2025-05-29 00:58:23.233551 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.233555 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.233559 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.233563 | orchestrator |
2025-05-29 00:58:23.233568 | orchestrator | TASK [ceph-osd : collect osd ids] **********************************************
2025-05-29 00:58:23.233572 | orchestrator | Thursday 29 May 2025 00:53:58 +0000 (0:00:00.623) 0:08:43.293 **********
2025-05-29 00:58:23.233576 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.233580 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.233584 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.233588 | orchestrator |
2025-05-29 00:58:23.233609 | orchestrator | TASK [ceph-osd : include_tasks systemd.yml] ************************************
2025-05-29 00:58:23.233614 | orchestrator | Thursday 29 May 2025 00:54:00 +0000 (0:00:01.954) 0:08:45.248 **********
2025-05-29 00:58:23.233618 | orchestrator | included: /ansible/roles/ceph-osd/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.233626 | orchestrator |
2025-05-29 00:58:23.233630 | orchestrator | TASK [ceph-osd : generate systemd unit file] ***********************************
2025-05-29 00:58:23.233635 | orchestrator | Thursday 29 May 2025 00:54:01 +0000 (0:00:00.549) 0:08:45.797 **********
2025-05-29 00:58:23.233639 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.233643 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.233647 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.233651 | orchestrator |
2025-05-29 00:58:23.233655 | orchestrator | TASK [ceph-osd : generate systemd ceph-osd target file] ************************
2025-05-29 00:58:23.233659 | orchestrator | Thursday 29 May 2025 00:54:02 +0000 (0:00:01.157) 0:08:46.955 **********
2025-05-29 00:58:23.233663 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.233667 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.233671 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.233675 | orchestrator |
2025-05-29 00:58:23.233679 | orchestrator | TASK [ceph-osd : enable ceph-osd.target] ***************************************
2025-05-29 00:58:23.233683 | orchestrator | Thursday 29 May 2025 00:54:03 +0000 (0:00:01.438) 0:08:48.393 **********
2025-05-29 00:58:23.233687 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.233692 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.233696 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.233700 | orchestrator |
2025-05-29 00:58:23.233704 | orchestrator | TASK [ceph-osd : ensure systemd service override directory exists] *************
2025-05-29 00:58:23.233708 | orchestrator | Thursday 29 May 2025 00:54:05 +0000 (0:00:01.723) 0:08:50.116 **********
2025-05-29 00:58:23.233712 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.233716 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.233720 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.233724 | orchestrator |
2025-05-29 00:58:23.233728 | orchestrator | TASK [ceph-osd : add ceph-osd systemd service overrides] ***********************
2025-05-29 00:58:23.233732 | orchestrator | Thursday 29 May 2025 00:54:05 +0000 (0:00:00.731) 0:08:50.539 **********
2025-05-29 00:58:23.233736 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.233740 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.233747 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.233752 | orchestrator |
2025-05-29 00:58:23.233756 | orchestrator | TASK [ceph-osd : ensure "/var/lib/ceph/osd/{{ cluster }}-{{ item }}" is present] ***
2025-05-29 00:58:23.233760 | orchestrator | Thursday 29 May 2025 00:54:06 +0000 (0:00:00.731) 0:08:51.271 **********
2025-05-29 00:58:23.233764 | orchestrator | ok: [testbed-node-3] => (item=2)
2025-05-29 00:58:23.233768 | orchestrator | ok: [testbed-node-4] => (item=1)
2025-05-29 00:58:23.233772 | orchestrator | ok: [testbed-node-5] => (item=0)
2025-05-29 00:58:23.233776 | orchestrator | ok: [testbed-node-3] => (item=4)
2025-05-29 00:58:23.233780 | orchestrator | ok: [testbed-node-4] => (item=5)
2025-05-29 00:58:23.233784 | orchestrator | ok: [testbed-node-5] => (item=3)
2025-05-29 00:58:23.233788 | orchestrator |
2025-05-29 00:58:23.233792 | orchestrator | TASK [ceph-osd : systemd start osd] ********************************************
2025-05-29 00:58:23.233796 | orchestrator | Thursday 29 May 2025 00:54:07 +0000 (0:00:01.059) 0:08:52.330 **********
2025-05-29 00:58:23.233801 | orchestrator | changed: [testbed-node-4] => (item=1)
2025-05-29 00:58:23.233805 | orchestrator | changed: [testbed-node-3] => (item=2)
2025-05-29 00:58:23.233809 | orchestrator | changed: [testbed-node-5] => (item=0)
2025-05-29 00:58:23.233813 | orchestrator | changed: [testbed-node-4] => (item=5)
2025-05-29 00:58:23.233817 | orchestrator | changed: [testbed-node-3] => (item=4)
2025-05-29 00:58:23.233821 | orchestrator | changed: [testbed-node-5] => (item=3)
2025-05-29 00:58:23.233825 | orchestrator |
2025-05-29 00:58:23.233829 | orchestrator | TASK [ceph-osd : unset noup flag] **********************************************
2025-05-29 00:58:23.233833 | orchestrator | Thursday 29 May 2025 00:54:11 +0000 (0:00:03.486) 0:08:55.816 **********
2025-05-29 00:58:23.233837 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.233841 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.233849 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-29 00:58:23.233853 | orchestrator |
2025-05-29 00:58:23.233857 | orchestrator | TASK [ceph-osd : wait for all osd to be up] ************************************
2025-05-29 00:58:23.233861 | orchestrator | Thursday 29 May 2025 00:54:14 +0000 (0:00:02.845) 0:08:58.662 **********
2025-05-29 00:58:23.233865 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.233869 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.233873 | orchestrator | FAILED - RETRYING: [testbed-node-5 -> testbed-node-0]: wait for all osd to be up (60 retries left).
2025-05-29 00:58:23.233877 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)]
2025-05-29 00:58:23.233881 | orchestrator |
2025-05-29 00:58:23.233886 | orchestrator | TASK [ceph-osd : include crush_rules.yml] **************************************
2025-05-29 00:58:23.233890 | orchestrator | Thursday 29 May 2025 00:54:26 +0000 (0:00:12.753) 0:09:11.415 **********
2025-05-29 00:58:23.233894 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.233898 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.233902 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.233906 | orchestrator |
2025-05-29 00:58:23.233910 | orchestrator | TASK [ceph-osd : include openstack_config.yml] *********************************
2025-05-29 00:58:23.233914 | orchestrator | Thursday 29 May 2025 00:54:27 +0000 (0:00:00.488) 0:09:11.904 **********
2025-05-29 00:58:23.233918 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.233922 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.233926 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.233930 | orchestrator |
2025-05-29 00:58:23.233934 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-29 00:58:23.233938 | orchestrator | Thursday 29 May 2025 00:54:28 +0000 (0:00:01.202) 0:09:13.106 **********
2025-05-29 00:58:23.233943 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.233947 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.233951 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.233955 | orchestrator |
2025-05-29 00:58:23.233959 | orchestrator | RUNNING HANDLER [ceph-handler : osds handler] **********************************
2025-05-29 00:58:23.233975 | orchestrator | Thursday 29 May 2025 00:54:29 +0000 (0:00:00.709) 0:09:13.816 **********
2025-05-29 00:58:23.233980 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_osds.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.233984 | orchestrator |
2025-05-29 00:58:23.233988 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact trigger_restart] **********************
2025-05-29 00:58:23.233992 | orchestrator | Thursday 29 May 2025 00:54:29 +0000 (0:00:00.797) 0:09:14.614 **********
2025-05-29 00:58:23.233996 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-29 00:58:23.234000 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-29 00:58:23.234004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-29 00:58:23.234008 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234012 | orchestrator |
2025-05-29 00:58:23.234043 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called before restart] ********
2025-05-29 00:58:23.234047 | orchestrator | Thursday 29 May 2025 00:54:30 +0000 (0:00:00.450) 0:09:15.064 **********
2025-05-29 00:58:23.234052 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234056 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.234060 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.234064 | orchestrator |
2025-05-29 00:58:23.234068 | orchestrator | RUNNING HANDLER [ceph-handler : unset noup flag] *******************************
2025-05-29 00:58:23.234072 | orchestrator | Thursday 29 May 2025 00:54:30 +0000 (0:00:00.319) 0:09:15.383 **********
2025-05-29 00:58:23.234077 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234081 | orchestrator |
2025-05-29 00:58:23.234085 | orchestrator | RUNNING HANDLER [ceph-handler : copy osd restart script] ***********************
2025-05-29 00:58:23.234089 | orchestrator | Thursday 29 May 2025 00:54:30 +0000 (0:00:00.228) 0:09:15.612 **********
2025-05-29 00:58:23.234093 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234101 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.234105 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.234109 | orchestrator |
2025-05-29 00:58:23.234113 | orchestrator | RUNNING HANDLER [ceph-handler : get pool list] *********************************
2025-05-29 00:58:23.234118 | orchestrator | Thursday 29 May 2025 00:54:31 +0000 (0:00:00.614) 0:09:16.226 **********
2025-05-29 00:58:23.234124 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234129 | orchestrator |
2025-05-29 00:58:23.234133 | orchestrator | RUNNING HANDLER [ceph-handler : get balancer module status] ********************
2025-05-29 00:58:23.234137 | orchestrator | Thursday 29 May 2025 00:54:31 +0000 (0:00:00.236) 0:09:16.463 **********
2025-05-29 00:58:23.234141 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234145 | orchestrator |
2025-05-29 00:58:23.234149 | orchestrator | RUNNING HANDLER [ceph-handler : set_fact pools_pgautoscaler_mode] **************
2025-05-29 00:58:23.234154 | orchestrator | Thursday 29 May 2025 00:54:32 +0000 (0:00:00.221) 0:09:16.685 **********
2025-05-29 00:58:23.234158 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234162 | orchestrator |
2025-05-29 00:58:23.234166 | orchestrator | RUNNING HANDLER [ceph-handler : disable balancer] ******************************
2025-05-29 00:58:23.234170 | orchestrator | Thursday 29 May 2025 00:54:32 +0000 (0:00:00.134) 0:09:16.819 **********
2025-05-29 00:58:23.234174 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234178 | orchestrator |
2025-05-29 00:58:23.234183 | orchestrator | RUNNING HANDLER [ceph-handler : disable pg autoscale on pools] *****************
2025-05-29 00:58:23.234187 | orchestrator | Thursday 29 May 2025 00:54:32 +0000 (0:00:00.251) 0:09:17.071 **********
2025-05-29 00:58:23.234191 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234195 | orchestrator |
2025-05-29 00:58:23.234199 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph osds daemon(s)] *******************
2025-05-29 00:58:23.234203 | orchestrator | Thursday 29 May 2025 00:54:32 +0000 (0:00:00.226) 0:09:17.297 **********
2025-05-29 00:58:23.234208 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-29 00:58:23.234246 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-29 00:58:23.234251 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-29 00:58:23.234256 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234260 | orchestrator |
2025-05-29 00:58:23.234264 | orchestrator | RUNNING HANDLER [ceph-handler : set _osd_handler_called after restart] *********
2025-05-29 00:58:23.234268 | orchestrator | Thursday 29 May 2025 00:54:33 +0000 (0:00:00.458) 0:09:17.755 **********
2025-05-29 00:58:23.234272 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234276 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.234280 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.234284 | orchestrator |
2025-05-29 00:58:23.234288 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable pg autoscale on pools] ***************
2025-05-29 00:58:23.234292 | orchestrator | Thursday 29 May 2025 00:54:33 +0000 (0:00:00.347) 0:09:18.102 **********
2025-05-29 00:58:23.234296 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234301 | orchestrator |
2025-05-29 00:58:23.234305 | orchestrator | RUNNING HANDLER [ceph-handler : re-enable balancer] ****************************
2025-05-29 00:58:23.234309 | orchestrator | Thursday 29 May 2025 00:54:34 +0000 (0:00:00.763) 0:09:18.866 **********
2025-05-29 00:58:23.234313 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234317 | orchestrator |
2025-05-29 00:58:23.234321 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ********************
2025-05-29 00:58:23.234325 | orchestrator | Thursday 29 May 2025 00:54:34 +0000 (0:00:00.216) 0:09:19.082 **********
2025-05-29 00:58:23.234329 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.234333 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.234337 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.234342 | orchestrator |
2025-05-29 00:58:23.234346 | orchestrator | PLAY [Apply role ceph-crash] ***************************************************
2025-05-29 00:58:23.234350 | orchestrator |
2025-05-29 00:58:23.234353 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-29 00:58:23.234361 | orchestrator | Thursday 29 May 2025 00:54:37 +0000 (0:00:02.815) 0:09:21.898 **********
2025-05-29 00:58:23.234379 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.234384 | orchestrator |
2025-05-29 00:58:23.234388 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-29 00:58:23.234392 | orchestrator | Thursday 29 May 2025 00:54:38 +0000 (0:00:01.368) 0:09:23.267 **********
2025-05-29 00:58:23.234395 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234399 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:58:23.234403 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.234407 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:58:23.234410 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.234414 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:58:23.234418 | orchestrator |
2025-05-29 00:58:23.234422 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-29 00:58:23.234425 | orchestrator | Thursday 29 May 2025 00:54:39 +0000 (0:00:00.810) 0:09:24.077 **********
2025-05-29 00:58:23.234429 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.234433 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.234437 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.234440 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.234444 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.234448 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.234451 | orchestrator |
2025-05-29 00:58:23.234455 | orchestrator | TASK [ceph-handler : check for a mds container] ********************************
2025-05-29 00:58:23.234459 | orchestrator | Thursday 29 May 2025 00:54:40 +0000 (0:00:01.243) 0:09:25.321 **********
2025-05-29 00:58:23.234463 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.234466 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.234470 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.234474 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.234478 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.234481 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.234485 | orchestrator |
2025-05-29 00:58:23.234489 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-29 00:58:23.234493 | orchestrator | Thursday 29 May 2025 00:54:41 +0000 (0:00:01.023) 0:09:26.344 **********
2025-05-29 00:58:23.234496 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.234500 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.234504 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.234507 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.234511 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.234515 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.234518 | orchestrator |
2025-05-29 00:58:23.234527 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-29 00:58:23.234531 | orchestrator | Thursday 29 May 2025 00:54:42 +0000 (0:00:01.257) 0:09:27.602 **********
2025-05-29 00:58:23.234534 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234538 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:58:23.234542 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.234545 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:58:23.234549 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.234553 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:58:23.234557 | orchestrator |
2025-05-29 00:58:23.234560 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-29 00:58:23.234564 | orchestrator | Thursday 29 May 2025 00:54:43 +0000 (0:00:01.016) 0:09:28.618 **********
2025-05-29 00:58:23.234568 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.234572 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.234575 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.234579 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234586 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.234590 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.234594 | orchestrator |
2025-05-29 00:58:23.234598 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-29 00:58:23.234601 | orchestrator | Thursday 29 May 2025 00:54:44 +0000 (0:00:00.656) 0:09:29.275 **********
2025-05-29 00:58:23.234605 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.234609 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.234613 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.234616 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234620 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.234624 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.234628 | orchestrator |
2025-05-29 00:58:23.234631 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-29 00:58:23.234638 | orchestrator | Thursday 29 May 2025 00:54:45 +0000 (0:00:01.226) 0:09:30.501 **********
2025-05-29 00:58:23.234642 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.234646 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.234649 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.234653 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234657 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.234660 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.234664 | orchestrator |
2025-05-29 00:58:23.234668 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-29 00:58:23.234672 | orchestrator | Thursday 29 May 2025 00:54:46 +0000 (0:00:00.629) 0:09:31.131 **********
2025-05-29 00:58:23.234675 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.234679 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.234683 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.234687 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234690 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.234694 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.234698 | orchestrator |
2025-05-29 00:58:23.234702 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-29 00:58:23.234705 | orchestrator | Thursday 29 May 2025 00:54:47 +0000 (0:00:01.013) 0:09:32.144 **********
2025-05-29 00:58:23.234709 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.234713 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.234716 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.234720 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234724 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.234728 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.234731 | orchestrator |
2025-05-29 00:58:23.234735 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-29 00:58:23.234739 | orchestrator | Thursday 29 May 2025 00:54:48 +0000 (0:00:00.646) 0:09:32.791 **********
2025-05-29 00:58:23.234743 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:58:23.234747 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:58:23.234762 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:58:23.234766 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.234770 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.234774 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.234777 | orchestrator |
2025-05-29 00:58:23.234781 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-29 00:58:23.234785 | orchestrator | Thursday 29 May 2025 00:54:49 +0000 (0:00:01.258) 0:09:34.049 **********
2025-05-29 00:58:23.234789 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.234792 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.234796 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.234800 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234804 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.234807 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.234811 | orchestrator |
2025-05-29 00:58:23.234815 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-29 00:58:23.234822 | orchestrator | Thursday 29 May 2025 00:54:50 +0000 (0:00:00.632) 0:09:34.681 **********
2025-05-29 00:58:23.234825 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:58:23.234829 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:58:23.234833 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:58:23.234837 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234840 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.234844 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.234848 | orchestrator |
2025-05-29 00:58:23.234852 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-29 00:58:23.234855 | orchestrator | Thursday 29 May 2025 00:54:50 +0000 (0:00:00.854) 0:09:35.536 **********
2025-05-29 00:58:23.234859 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.234863 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.234867 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.234870 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.234874 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.234878 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.234881 | orchestrator |
2025-05-29 00:58:23.234885 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-29 00:58:23.234889 | orchestrator | Thursday 29 May 2025 00:54:51 +0000 (0:00:00.703) 0:09:36.240 **********
2025-05-29 00:58:23.234893 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.234896 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.234900 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.234907 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.234910 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.234914 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.234918 | orchestrator |
2025-05-29 00:58:23.234922 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-29 00:58:23.234925 | orchestrator | Thursday 29 May 2025 00:54:52 +0000 (0:00:00.987) 0:09:37.228 **********
2025-05-29 00:58:23.234929 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.234933 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.234937 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.234940 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.234944 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.234948 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.234952 | orchestrator |
2025-05-29 00:58:23.234955 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-29 00:58:23.234959 | orchestrator | Thursday 29 May 2025 00:54:53 +0000 (0:00:00.614) 0:09:37.843 **********
2025-05-29 00:58:23.234963 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.234967 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.234970 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.234974 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.234978 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.234982 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.234985 | orchestrator |
2025-05-29 00:58:23.234989 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-29 00:58:23.234993 | orchestrator | Thursday 29 May 2025 00:54:54 +0000 (0:00:00.857) 0:09:38.701 **********
2025-05-29 00:58:23.234997 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235000 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235004 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235008 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235011 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235015 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235019 | orchestrator |
2025-05-29 00:58:23.235023 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-29 00:58:23.235026 | orchestrator | Thursday 29 May 2025 00:54:54 +0000 (0:00:00.638) 0:09:39.339 **********
2025-05-29 00:58:23.235030 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:58:23.235034 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:58:23.235041 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:58:23.235044 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235048 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235052 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235056 | orchestrator |
2025-05-29 00:58:23.235059 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-29 00:58:23.235063 | orchestrator | Thursday 29 May 2025 00:54:55 +0000 (0:00:00.873) 0:09:40.212 **********
2025-05-29 00:58:23.235067 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:58:23.235071 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:58:23.235074 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:58:23.235078 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.235082 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.235085 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.235089 | orchestrator |
2025-05-29 00:58:23.235093 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-29 00:58:23.235097 | orchestrator | Thursday 29 May 2025 00:54:56 +0000 (0:00:00.642) 0:09:40.854 **********
2025-05-29 00:58:23.235100 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235104 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235108 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235112 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235115 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235119 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235123 | orchestrator |
2025-05-29 00:58:23.235127 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-29 00:58:23.235130 | orchestrator | Thursday 29 May 2025 00:54:57 +0000 (0:00:00.879) 0:09:41.734 **********
2025-05-29 00:58:23.235145 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235150 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235154 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235157 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235161 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235165 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235169 | orchestrator |
2025-05-29 00:58:23.235172 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-29 00:58:23.235176 | orchestrator | Thursday 29 May 2025 00:54:57 +0000 (0:00:00.676) 0:09:42.411 **********
2025-05-29 00:58:23.235180 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235184 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235188 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235191 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235195 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235199 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235202 | orchestrator |
2025-05-29 00:58:23.235206 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-29 00:58:23.235210 | orchestrator | Thursday 29 May 2025 00:54:58 +0000 (0:00:00.882) 0:09:43.293 **********
2025-05-29 00:58:23.235226 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235230 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235234 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235237 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235241 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235245 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235248 | orchestrator |
2025-05-29 00:58:23.235252 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-29 00:58:23.235256 | orchestrator | Thursday 29 May 2025 00:54:59 +0000 (0:00:00.648) 0:09:43.941 **********
2025-05-29 00:58:23.235260 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235263 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235267 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235271 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235274 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235281 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235285 | orchestrator |
2025-05-29 00:58:23.235289 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-29 00:58:23.235293 | orchestrator | Thursday 29 May 2025 00:55:00 +0000 (0:00:00.926) 0:09:44.868 **********
2025-05-29 00:58:23.235299 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235303 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235306 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235310 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235314 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235317 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235321 | orchestrator |
2025-05-29 00:58:23.235325 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-29 00:58:23.235329 | orchestrator | Thursday 29 May 2025 00:55:00 +0000 (0:00:00.643) 0:09:45.511 **********
2025-05-29 00:58:23.235332 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235336 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235340 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235344 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235347 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235351 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235355 | orchestrator |
2025-05-29 00:58:23.235358 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-29 00:58:23.235362 | orchestrator | Thursday 29 May 2025 00:55:01 +0000 (0:00:00.878) 0:09:46.390 **********
2025-05-29 00:58:23.235366 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235370 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235373 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235377 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235381 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235384 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235388 | orchestrator |
2025-05-29 00:58:23.235392 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-29 00:58:23.235396 | orchestrator | Thursday 29 May 2025 00:55:02 +0000 (0:00:00.639) 0:09:47.029 **********
2025-05-29 00:58:23.235399 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235403 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235407 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235411 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235414 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235418 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235422 | orchestrator |
2025-05-29 00:58:23.235426 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-29 00:58:23.235430 | orchestrator | Thursday 29 May 2025 00:55:03 +0000 (0:00:00.870) 0:09:47.900 **********
2025-05-29 00:58:23.235433 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235437 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235441 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235444 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235448 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235452 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235455 | orchestrator |
2025-05-29 00:58:23.235459 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-29 00:58:23.235463 | orchestrator | Thursday 29 May 2025 00:55:03 +0000 (0:00:00.685) 0:09:48.586 **********
2025-05-29 00:58:23.235467 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235470 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235474 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235478 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235481 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235485 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235489 | orchestrator |
2025-05-29 00:58:23.235496 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-29 00:58:23.235499 | orchestrator | Thursday 29 May 2025 00:55:04 +0000 (0:00:00.945) 0:09:49.531 **********
2025-05-29 00:58:23.235503 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235507 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235511 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235526 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235530 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235534 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235538 | orchestrator |
2025-05-29 00:58:23.235541 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-29 00:58:23.235545 | orchestrator | Thursday 29 May 2025 00:55:05 +0000 (0:00:00.721) 0:09:50.253 **********
2025-05-29 00:58:23.235549 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-29 00:58:23.235552 | orchestrator | skipping: [testbed-node-0] => (item=)
2025-05-29 00:58:23.235556 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235560 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-29 00:58:23.235563 | orchestrator | skipping: [testbed-node-1] => (item=)
2025-05-29 00:58:23.235567 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235571 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-29 00:58:23.235574 | orchestrator | skipping: [testbed-node-2] => (item=)
2025-05-29 00:58:23.235578 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235582 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-29 00:58:23.235585 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-29 00:58:23.235589 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-29 00:58:23.235593 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-29 00:58:23.235596 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235600 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235603 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-29 00:58:23.235607 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-29 00:58:23.235611 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235614 | orchestrator |
2025-05-29 00:58:23.235618 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-29 00:58:23.235622 | orchestrator | Thursday 29 May 2025 00:55:06 +0000 (0:00:00.950) 0:09:51.203 **********
2025-05-29 00:58:23.235626 | orchestrator | skipping: [testbed-node-0] => (item=osd memory target)
2025-05-29 00:58:23.235629 | orchestrator | skipping: [testbed-node-0] => (item=osd_memory_target)
2025-05-29 00:58:23.235633 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235637 | orchestrator | skipping: [testbed-node-1] => (item=osd memory target)
2025-05-29 00:58:23.235643 | orchestrator | skipping: [testbed-node-1] => (item=osd_memory_target)
2025-05-29 00:58:23.235647 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235651 | orchestrator | skipping: [testbed-node-2] => (item=osd memory target)
2025-05-29 00:58:23.235654 | orchestrator | skipping: [testbed-node-2] => (item=osd_memory_target)
2025-05-29 00:58:23.235658 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235662 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-29 00:58:23.235665 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-29 00:58:23.235669 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235673 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-29 00:58:23.235676 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-29 00:58:23.235680 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235684 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-29 00:58:23.235687 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-29 00:58:23.235691 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235695 | orchestrator |
2025-05-29 00:58:23.235698 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-29 00:58:23.235705 | orchestrator | Thursday 29 May 2025 00:55:07 +0000 (0:00:00.936) 0:09:52.139 **********
2025-05-29 00:58:23.235709 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235712 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235716 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235720 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235723 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235727 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235731 | orchestrator |
2025-05-29 00:58:23.235735 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-29 00:58:23.235738 | orchestrator | Thursday 29 May 2025 00:55:08 +0000 (0:00:00.996) 0:09:53.136 **********
2025-05-29 00:58:23.235742 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235746 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235749 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235753 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235757 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235761 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235764 | orchestrator |
2025-05-29 00:58:23.235768 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-29 00:58:23.235772 | orchestrator | Thursday 29 May 2025 00:55:09 +0000 (0:00:00.808) 0:09:53.944 **********
2025-05-29 00:58:23.235776 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235779 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235783 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235787 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235790 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235794 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235798 | orchestrator |
2025-05-29 00:58:23.235801 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-29 00:58:23.235805 | orchestrator | Thursday 29 May 2025 00:55:10 +0000 (0:00:01.104) 0:09:55.049 **********
2025-05-29 00:58:23.235809 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235813 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235816 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235820 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235824 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235827 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235831 | orchestrator |
2025-05-29 00:58:23.235835 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-29 00:58:23.235839 | orchestrator | Thursday 29 May 2025 00:55:10 +0000 (0:00:00.560) 0:09:55.609 **********
2025-05-29 00:58:23.235854 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235858 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235862 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235866 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235869 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235873 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235877 | orchestrator |
2025-05-29 00:58:23.235880 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-29 00:58:23.235884 | orchestrator | Thursday 29 May 2025 00:55:11 +0000 (0:00:00.755) 0:09:56.364 **********
2025-05-29 00:58:23.235888 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235892 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.235895 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.235899 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.235903 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.235906 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.235910 | orchestrator |
2025-05-29 00:58:23.235914 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-29 00:58:23.235917 | orchestrator | Thursday 29 May 2025 00:55:12 +0000 (0:00:00.648) 0:09:57.013 **********
2025-05-29 00:58:23.235925 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-29 00:58:23.235929 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-29 00:58:23.235933 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-29 00:58:23.235936 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235940 | orchestrator |
2025-05-29 00:58:23.235944 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-29 00:58:23.235947 | orchestrator | Thursday 29 May 2025 00:55:12 +0000 (0:00:00.395) 0:09:57.408 **********
2025-05-29 00:58:23.235951 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-29 00:58:23.235955 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-29 00:58:23.235959 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-29 00:58:23.235962 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235966 | orchestrator |
2025-05-29 00:58:23.235970 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-29 00:58:23.235976 | orchestrator | Thursday 29 May 2025 00:55:13 +0000 (0:00:00.380) 0:09:57.789 **********
2025-05-29 00:58:23.235979 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-29 00:58:23.235983 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-29 00:58:23.235987 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-29 00:58:23.235991 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.235994 | orchestrator |
2025-05-29 00:58:23.235998 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-29 00:58:23.236002 | orchestrator | Thursday 29 May 2025 00:55:13 +0000 (0:00:00.532) 0:09:58.322 **********
2025-05-29 00:58:23.236006 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.236009 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.236014 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.236020 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.236026 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.236029 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.236033 | orchestrator |
2025-05-29 00:58:23.236037 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-29 00:58:23.236040 | orchestrator | Thursday 29 May 2025 00:55:14 +0000 (0:00:00.843) 0:09:59.166 **********
2025-05-29 00:58:23.236044 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-29 00:58:23.236048 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.236052 | orchestrator | skipping: [testbed-node-1] => (item=0)
2025-05-29 00:58:23.236055 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.236059 | orchestrator | skipping: [testbed-node-2] => (item=0)
2025-05-29 00:58:23.236063 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.236066 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-29 00:58:23.236070 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.236074 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-29 00:58:23.236077 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.236081 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-29 00:58:23.236085 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.236089 | orchestrator |
2025-05-29 00:58:23.236092 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-29 00:58:23.236096 | orchestrator | Thursday 29 May 2025 00:55:15 +0000 (0:00:00.958) 0:10:00.125 **********
2025-05-29 00:58:23.236100 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.236104 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.236107 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.236111 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.236115 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.236118 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.236122 | orchestrator |
2025-05-29 00:58:23.236126 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-29 00:58:23.236133 | orchestrator | Thursday 29 May 2025 00:55:16 +0000 (0:00:00.890) 0:10:01.015 **********
2025-05-29 00:58:23.236137 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.236141 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.236144 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.236148 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.236152 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.236155 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.236159 | orchestrator |
2025-05-29 00:58:23.236163 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-29 00:58:23.236167 | orchestrator | Thursday 29 May 2025 00:55:17 +0000 (0:00:00.655) 0:10:01.671 **********
2025-05-29 00:58:23.236170 | orchestrator | skipping: [testbed-node-0] => (item=0)
2025-05-29 00:58:23.236174 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.236178 | orchestrator | skipping: [testbed-node-1] => (item=0)  2025-05-29 00:58:23.236181 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.236185 | orchestrator | skipping: [testbed-node-2] => (item=0)  2025-05-29 00:58:23.236189 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.236204 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-29 00:58:23.236209 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.236226 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-29 00:58:23.236230 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.236234 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-29 00:58:23.236237 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.236241 | orchestrator | 2025-05-29 00:58:23.236245 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-29 00:58:23.236249 | orchestrator | Thursday 29 May 2025 00:55:17 +0000 (0:00:00.935) 0:10:02.606 ********** 2025-05-29 00:58:23.236252 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:23.236256 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:23.236260 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:23.236263 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.236267 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.236271 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.236275 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.236278 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.236282 | orchestrator | skipping: 
[testbed-node-5]
2025-05-29 00:58:23.236286 | orchestrator |
2025-05-29 00:58:23.236290 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-29 00:58:23.236293 | orchestrator | Thursday 29 May 2025 00:55:18 +0000 (0:00:00.585) 0:10:03.192 **********
2025-05-29 00:58:23.236297 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-3)
2025-05-29 00:58:23.236301 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-4)
2025-05-29 00:58:23.236305 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-5)
2025-05-29 00:58:23.236308 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.236312 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-3)
2025-05-29 00:58:23.236318 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-4)
2025-05-29 00:58:23.236322 | orchestrator | skipping: [testbed-node-1] => (item=testbed-node-5)
2025-05-29 00:58:23.236326 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.236329 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-3)
2025-05-29 00:58:23.236333 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-4)
2025-05-29 00:58:23.236337 | orchestrator | skipping: [testbed-node-2] => (item=testbed-node-5)
2025-05-29 00:58:23.236344 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.236347 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-29 00:58:23.236351 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-29 00:58:23.236355 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-29 00:58:23.236358 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-29 00:58:23.236362 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-29 00:58:23.236366 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-29 00:58:23.236370 |
orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.236373 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.236377 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-29 00:58:23.236381 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-29 00:58:23.236385 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-29 00:58:23.236388 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.236392 | orchestrator |
2025-05-29 00:58:23.236396 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-29 00:58:23.236400 | orchestrator | Thursday 29 May 2025 00:55:19 +0000 (0:00:01.261) 0:10:04.453 **********
2025-05-29 00:58:23.236403 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.236407 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.236411 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.236414 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.236418 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.236422 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.236425 | orchestrator |
2025-05-29 00:58:23.236429 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-29 00:58:23.236433 | orchestrator | Thursday 29 May 2025 00:55:21 +0000 (0:00:01.310) 0:10:05.764 **********
2025-05-29 00:58:23.236437 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.236440 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.236444 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.236448 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-29 00:58:23.236451 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.236455 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-29 00:58:23.236459 | orchestrator | skipping:
[testbed-node-4]
2025-05-29 00:58:23.236462 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-29 00:58:23.236466 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.236470 | orchestrator |
2025-05-29 00:58:23.236473 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-29 00:58:23.236477 | orchestrator | Thursday 29 May 2025 00:55:22 +0000 (0:00:01.626) 0:10:07.391 **********
2025-05-29 00:58:23.236481 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.236485 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.236488 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.236492 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.236496 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.236499 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.236503 | orchestrator |
2025-05-29 00:58:23.236507 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-29 00:58:23.236513 | orchestrator | Thursday 29 May 2025 00:55:24 +0000 (0:00:01.291) 0:10:08.683 **********
2025-05-29 00:58:23.236516 | orchestrator | skipping: [testbed-node-0]
2025-05-29 00:58:23.236520 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:23.236524 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:23.236528 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.236531 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.236535 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.236540 | orchestrator |
2025-05-29 00:58:23.236547 | orchestrator | TASK [ceph-crash : create client.crash keyring] ********************************
2025-05-29 00:58:23.236564 | orchestrator | Thursday 29 May 2025 00:55:25 +0000 (0:00:01.337) 0:10:10.020 **********
2025-05-29 00:58:23.236572 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:58:23.236578 | orchestrator |
2025-05-29 00:58:23.236584 | orchestrator | TASK [ceph-crash : get keys from monitors] *************************************
2025-05-29 00:58:23.236590 | orchestrator | Thursday 29 May 2025 00:55:29 +0000 (0:00:03.727) 0:10:13.747 **********
2025-05-29 00:58:23.236596 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:58:23.236602 | orchestrator |
2025-05-29 00:58:23.236608 | orchestrator | TASK [ceph-crash : copy ceph key(s) if needed] *********************************
2025-05-29 00:58:23.236614 | orchestrator | Thursday 29 May 2025 00:55:30 +0000 (0:00:01.697) 0:10:15.444 **********
2025-05-29 00:58:23.236620 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:58:23.236626 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:58:23.236632 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:58:23.236639 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.236646 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.236650 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.236654 | orchestrator |
2025-05-29 00:58:23.236658 | orchestrator | TASK [ceph-crash : create /var/lib/ceph/crash/posted] **************************
2025-05-29 00:58:23.236661 | orchestrator | Thursday 29 May 2025 00:55:32 +0000 (0:00:01.657) 0:10:17.102 **********
2025-05-29 00:58:23.236665 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:58:23.236669 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:58:23.236673 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:58:23.236676 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.236680 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.236684 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.236688 | orchestrator |
2025-05-29 00:58:23.236691 | orchestrator | TASK [ceph-crash : include_tasks systemd.yml] **********************************
2025-05-29 00:58:23.236698 | orchestrator | Thursday 29 May 2025 00:55:33 +0000 (0:00:01.251) 0:10:18.353 **********
2025-05-29 00:58:23.236702 | orchestrator | included: /ansible/roles/ceph-crash/tasks/systemd.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.236707 | orchestrator |
2025-05-29 00:58:23.236711 | orchestrator | TASK [ceph-crash : generate systemd unit file for ceph-crash container] ********
2025-05-29 00:58:23.236715 | orchestrator | Thursday 29 May 2025 00:55:34 +0000 (0:00:01.278) 0:10:19.632 **********
2025-05-29 00:58:23.236718 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:58:23.236722 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:58:23.236726 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:58:23.236730 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.236734 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.236737 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.236741 | orchestrator |
2025-05-29 00:58:23.236745 | orchestrator | TASK [ceph-crash : start the ceph-crash service] *******************************
2025-05-29 00:58:23.236749 | orchestrator | Thursday 29 May 2025 00:55:36 +0000 (0:00:01.732) 0:10:21.365 **********
2025-05-29 00:58:23.236753 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:58:23.236756 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:58:23.236760 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:58:23.236764 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.236768 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.236771 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.236775 | orchestrator |
2025-05-29 00:58:23.236779 | orchestrator | RUNNING HANDLER [ceph-handler : ceph crash handler] ****************************
2025-05-29 00:58:23.236783 | orchestrator | Thursday 29 May 2025 00:55:41 +0000 (0:00:04.777) 0:10:26.142 **********
2025-05-29 00:58:23.236787 | orchestrator | included:
/ansible/roles/ceph-handler/tasks/handler_crash.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.236791 | orchestrator |
2025-05-29 00:58:23.236794 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called before restart] ******
2025-05-29 00:58:23.236802 | orchestrator | Thursday 29 May 2025 00:55:42 +0000 (0:00:01.356) 0:10:27.499 **********
2025-05-29 00:58:23.236806 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:58:23.236810 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:58:23.236814 | orchestrator | ok: [testbed-node-2]
2025-05-29 00:58:23.236817 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.236821 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.236825 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.236829 | orchestrator |
2025-05-29 00:58:23.236833 | orchestrator | RUNNING HANDLER [ceph-handler : restart the ceph-crash service] ****************
2025-05-29 00:58:23.236837 | orchestrator | Thursday 29 May 2025 00:55:43 +0000 (0:00:00.673) 0:10:28.173 **********
2025-05-29 00:58:23.236840 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:58:23.236844 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:58:23.236848 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.236852 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:58:23.236855 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.236859 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.236863 | orchestrator |
2025-05-29 00:58:23.236867 | orchestrator | RUNNING HANDLER [ceph-handler : set _crash_handler_called after restart] *******
2025-05-29 00:58:23.236871 | orchestrator | Thursday 29 May 2025 00:55:46 +0000 (0:00:02.462) 0:10:30.635 **********
2025-05-29 00:58:23.236874 | orchestrator | ok: [testbed-node-0]
2025-05-29 00:58:23.236878 | orchestrator | ok: [testbed-node-1]
2025-05-29 00:58:23.236882 | orchestrator | ok:
[testbed-node-2]
2025-05-29 00:58:23.236886 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.236889 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.236893 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.236897 | orchestrator |
2025-05-29 00:58:23.236901 | orchestrator | PLAY [Apply role ceph-mds] *****************************************************
2025-05-29 00:58:23.236905 | orchestrator |
2025-05-29 00:58:23.236908 | orchestrator | TASK [ceph-handler : include check_running_containers.yml] *********************
2025-05-29 00:58:23.236916 | orchestrator | Thursday 29 May 2025 00:55:48 +0000 (0:00:02.639) 0:10:33.275 **********
2025-05-29 00:58:23.236920 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.236924 | orchestrator |
2025-05-29 00:58:23.236928 | orchestrator | TASK [ceph-handler : check for a mon container] ********************************
2025-05-29 00:58:23.236931 | orchestrator | Thursday 29 May 2025 00:55:49 +0000 (0:00:00.768) 0:10:34.044 **********
2025-05-29 00:58:23.236935 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.236939 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.236943 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.236946 | orchestrator |
2025-05-29 00:58:23.236950 | orchestrator | TASK [ceph-handler : check for an osd container] *******************************
2025-05-29 00:58:23.236954 | orchestrator | Thursday 29 May 2025 00:55:49 +0000 (0:00:00.346) 0:10:34.390 **********
2025-05-29 00:58:23.236957 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.236961 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.236965 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.236968 | orchestrator |
2025-05-29 00:58:23.236972 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-29
00:58:23.236976 | orchestrator | Thursday 29 May 2025 00:55:50 +0000 (0:00:00.715) 0:10:35.105 **********
2025-05-29 00:58:23.236980 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.236983 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.236987 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.236991 | orchestrator |
2025-05-29 00:58:23.236995 | orchestrator | TASK [ceph-handler : check for a rgw container] ********************************
2025-05-29 00:58:23.236998 | orchestrator | Thursday 29 May 2025 00:55:51 +0000 (0:00:00.731) 0:10:35.836 **********
2025-05-29 00:58:23.237002 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.237006 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.237009 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.237013 | orchestrator |
2025-05-29 00:58:23.237020 | orchestrator | TASK [ceph-handler : check for a mgr container] ********************************
2025-05-29 00:58:23.237023 | orchestrator | Thursday 29 May 2025 00:55:52 +0000 (0:00:01.113) 0:10:36.950 **********
2025-05-29 00:58:23.237027 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237031 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237035 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237038 | orchestrator |
2025-05-29 00:58:23.237044 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] *************************
2025-05-29 00:58:23.237048 | orchestrator | Thursday 29 May 2025 00:55:52 +0000 (0:00:00.320) 0:10:37.271 **********
2025-05-29 00:58:23.237052 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237056 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237059 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237063 | orchestrator |
2025-05-29 00:58:23.237067 | orchestrator | TASK [ceph-handler : check for a nfs container] ********************************
2025-05-29 00:58:23.237070 | orchestrator |
Thursday 29 May 2025 00:55:52 +0000 (0:00:00.316) 0:10:37.588 **********
2025-05-29 00:58:23.237074 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237078 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237082 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237085 | orchestrator |
2025-05-29 00:58:23.237089 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************
2025-05-29 00:58:23.237093 | orchestrator | Thursday 29 May 2025 00:55:53 +0000 (0:00:00.302) 0:10:37.890 **********
2025-05-29 00:58:23.237096 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237100 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237104 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237107 | orchestrator |
2025-05-29 00:58:23.237111 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] *********************
2025-05-29 00:58:23.237115 | orchestrator | Thursday 29 May 2025 00:55:53 +0000 (0:00:00.629) 0:10:38.520 **********
2025-05-29 00:58:23.237119 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237122 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237126 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237130 | orchestrator |
2025-05-29 00:58:23.237133 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] **********************
2025-05-29 00:58:23.237137 | orchestrator | Thursday 29 May 2025 00:55:54 +0000 (0:00:00.365) 0:10:38.885 **********
2025-05-29 00:58:23.237141 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237145 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237148 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237152 | orchestrator |
2025-05-29 00:58:23.237156 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] *************************
2025-05-29 00:58:23.237159 | orchestrator |
Thursday 29 May 2025 00:55:54 +0000 (0:00:00.343) 0:10:39.229 **********
2025-05-29 00:58:23.237163 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.237167 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.237170 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.237174 | orchestrator |
2025-05-29 00:58:23.237178 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] *******************
2025-05-29 00:58:23.237182 | orchestrator | Thursday 29 May 2025 00:55:55 +0000 (0:00:00.766) 0:10:39.996 **********
2025-05-29 00:58:23.237186 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237189 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237193 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237197 | orchestrator |
2025-05-29 00:58:23.237200 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-29 00:58:23.237204 | orchestrator | Thursday 29 May 2025 00:55:55 +0000 (0:00:00.607) 0:10:40.603 **********
2025-05-29 00:58:23.237208 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237223 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237227 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237231 | orchestrator |
2025-05-29 00:58:23.237235 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-29 00:58:23.237242 | orchestrator | Thursday 29 May 2025 00:55:56 +0000 (0:00:00.382) 0:10:40.985 **********
2025-05-29 00:58:23.237246 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.237249 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.237253 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.237257 | orchestrator |
2025-05-29 00:58:23.237260 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-29 00:58:23.237267 | orchestrator | Thursday 29 May 2025 00:55:56 +0000
(0:00:00.359) 0:10:41.345 **********
2025-05-29 00:58:23.237271 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.237274 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.237278 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.237282 | orchestrator |
2025-05-29 00:58:23.237286 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-29 00:58:23.237289 | orchestrator | Thursday 29 May 2025 00:55:57 +0000 (0:00:00.331) 0:10:41.677 **********
2025-05-29 00:58:23.237293 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.237297 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.237300 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.237304 | orchestrator |
2025-05-29 00:58:23.237308 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-29 00:58:23.237311 | orchestrator | Thursday 29 May 2025 00:55:57 +0000 (0:00:00.630) 0:10:42.307 **********
2025-05-29 00:58:23.237315 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237319 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237322 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237326 | orchestrator |
2025-05-29 00:58:23.237330 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-29 00:58:23.237334 | orchestrator | Thursday 29 May 2025 00:55:58 +0000 (0:00:00.342) 0:10:42.650 **********
2025-05-29 00:58:23.237337 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237341 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237345 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237348 | orchestrator |
2025-05-29 00:58:23.237352 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-29 00:58:23.237356 | orchestrator | Thursday 29 May 2025 00:55:58 +0000 (0:00:00.304) 0:10:42.955 **********
2025-05-29 00:58:23.237360 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237363 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237367 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237371 | orchestrator |
2025-05-29 00:58:23.237374 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-29 00:58:23.237378 | orchestrator | Thursday 29 May 2025 00:55:58 +0000 (0:00:00.317) 0:10:43.273 **********
2025-05-29 00:58:23.237382 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.237385 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.237389 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.237393 | orchestrator |
2025-05-29 00:58:23.237399 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-29 00:58:23.237403 | orchestrator | Thursday 29 May 2025 00:55:59 +0000 (0:00:00.674) 0:10:43.947 **********
2025-05-29 00:58:23.237407 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237411 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237414 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237418 | orchestrator |
2025-05-29 00:58:23.237422 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-29 00:58:23.237425 | orchestrator | Thursday 29 May 2025 00:55:59 +0000 (0:00:00.400) 0:10:44.348 **********
2025-05-29 00:58:23.237429 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237433 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237436 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237441 | orchestrator |
2025-05-29 00:58:23.237444 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-29 00:58:23.237448 | orchestrator | Thursday 29 May 2025 00:56:00 +0000 (0:00:00.415) 0:10:44.727 ********** 2025-05-29
00:58:23.237455 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237458 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237462 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237466 | orchestrator |
2025-05-29 00:58:23.237469 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-29 00:58:23.237473 | orchestrator | Thursday 29 May 2025 00:56:00 +0000 (0:00:00.415) 0:10:45.143 **********
2025-05-29 00:58:23.237477 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237480 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237484 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237488 | orchestrator |
2025-05-29 00:58:23.237491 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-29 00:58:23.237495 | orchestrator | Thursday 29 May 2025 00:56:01 +0000 (0:00:00.708) 0:10:45.852 **********
2025-05-29 00:58:23.237499 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237502 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237506 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237510 | orchestrator |
2025-05-29 00:58:23.237513 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-29 00:58:23.237517 | orchestrator | Thursday 29 May 2025 00:56:01 +0000 (0:00:00.392) 0:10:46.244 **********
2025-05-29 00:58:23.237521 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237525 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237528 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237532 | orchestrator |
2025-05-29 00:58:23.237536 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-29 00:58:23.237539 | orchestrator | Thursday 29 May 2025 00:56:01 +0000 (0:00:00.378) 0:10:46.622 ********** 2025-05-29
00:58:23.237543 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237547 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237550 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237554 | orchestrator |
2025-05-29 00:58:23.237558 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-29 00:58:23.237562 | orchestrator | Thursday 29 May 2025 00:56:02 +0000 (0:00:00.332) 0:10:46.955 **********
2025-05-29 00:58:23.237565 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237569 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237573 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237576 | orchestrator |
2025-05-29 00:58:23.237580 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-29 00:58:23.237584 | orchestrator | Thursday 29 May 2025 00:56:02 +0000 (0:00:00.629) 0:10:47.585 **********
2025-05-29 00:58:23.237588 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237591 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237595 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237599 | orchestrator |
2025-05-29 00:58:23.237604 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-29 00:58:23.237608 | orchestrator | Thursday 29 May 2025 00:56:03 +0000 (0:00:00.353) 0:10:47.938 **********
2025-05-29 00:58:23.237612 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237616 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237620 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237623 | orchestrator |
2025-05-29 00:58:23.237627 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-29 00:58:23.237631 |
orchestrator | Thursday 29 May 2025 00:56:03 +0000 (0:00:00.340) 0:10:48.279 **********
2025-05-29 00:58:23.237634 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237638 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237642 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237645 | orchestrator |
2025-05-29 00:58:23.237649 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-29 00:58:23.237656 | orchestrator | Thursday 29 May 2025 00:56:03 +0000 (0:00:00.327) 0:10:48.607 **********
2025-05-29 00:58:23.237659 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237663 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237667 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237670 | orchestrator |
2025-05-29 00:58:23.237674 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-29 00:58:23.237678 | orchestrator | Thursday 29 May 2025 00:56:04 +0000 (0:00:00.651) 0:10:49.258 **********
2025-05-29 00:58:23.237682 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-29 00:58:23.237685 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-29 00:58:23.237689 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237693 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-29 00:58:23.237696 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-29 00:58:23.237700 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237704 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-29 00:58:23.237707 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-29 00:58:23.237711 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237715 | orchestrator |
2025-05-29 00:58:23.237718 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-29 00:58:23.237724
| orchestrator | Thursday 29 May 2025 00:56:04 +0000 (0:00:00.373) 0:10:49.632 **********
2025-05-29 00:58:23.237728 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-29 00:58:23.237732 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-29 00:58:23.237736 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237739 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-29 00:58:23.237743 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-29 00:58:23.237747 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237751 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-29 00:58:23.237754 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-29 00:58:23.237758 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237762 | orchestrator |
2025-05-29 00:58:23.237765 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-29 00:58:23.237769 | orchestrator | Thursday 29 May 2025 00:56:05 +0000 (0:00:00.393) 0:10:50.025 **********
2025-05-29 00:58:23.237773 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237776 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237780 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237784 | orchestrator |
2025-05-29 00:58:23.237787 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-29 00:58:23.237791 | orchestrator | Thursday 29 May 2025 00:56:05 +0000 (0:00:00.358) 0:10:50.383 **********
2025-05-29 00:58:23.237795 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237799 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237802 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237806 | orchestrator |
2025-05-29 00:58:23.237810 | orchestrator | TASK
[ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-29 00:58:23.237813 | orchestrator | Thursday 29 May 2025 00:56:06 +0000 (0:00:00.678) 0:10:51.062 **********
2025-05-29 00:58:23.237817 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237821 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237825 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237828 | orchestrator |
2025-05-29 00:58:23.237832 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-29 00:58:23.237836 | orchestrator | Thursday 29 May 2025 00:56:06 +0000 (0:00:00.339) 0:10:51.402 **********
2025-05-29 00:58:23.237839 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237843 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237847 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237855 | orchestrator |
2025-05-29 00:58:23.237859 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-29 00:58:23.237863 | orchestrator | Thursday 29 May 2025 00:56:07 +0000 (0:00:00.345) 0:10:51.748 **********
2025-05-29 00:58:23.237866 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237870 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237874 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237877 | orchestrator |
2025-05-29 00:58:23.237881 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-29 00:58:23.237885 | orchestrator | Thursday 29 May 2025 00:56:07 +0000 (0:00:00.306) 0:10:52.054 **********
2025-05-29 00:58:23.237888 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.237892 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.237896 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.237902 |
orchestrator | 2025-05-29 00:58:23.237908 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-29 00:58:23.237914 | orchestrator | Thursday 29 May 2025 00:56:08 +0000 (0:00:00.628) 0:10:52.682 ********** 2025-05-29 00:58:23.237920 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.237929 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.237934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.237940 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.237947 | orchestrator | 2025-05-29 00:58:23.237951 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-29 00:58:23.237954 | orchestrator | Thursday 29 May 2025 00:56:08 +0000 (0:00:00.470) 0:10:53.153 ********** 2025-05-29 00:58:23.237958 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.237962 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.237966 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.237969 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.237973 | orchestrator | 2025-05-29 00:58:23.237979 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-29 00:58:23.237985 | orchestrator | Thursday 29 May 2025 00:56:08 +0000 (0:00:00.429) 0:10:53.583 ********** 2025-05-29 00:58:23.237992 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.237998 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.238004 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.238011 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238033 | orchestrator | 2025-05-29 00:58:23.238041 | 
orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-29 00:58:23.238047 | orchestrator | Thursday 29 May 2025 00:56:09 +0000 (0:00:00.422) 0:10:54.005 ********** 2025-05-29 00:58:23.238053 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238059 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.238063 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.238067 | orchestrator | 2025-05-29 00:58:23.238070 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-29 00:58:23.238074 | orchestrator | Thursday 29 May 2025 00:56:09 +0000 (0:00:00.379) 0:10:54.385 ********** 2025-05-29 00:58:23.238078 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-29 00:58:23.238082 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238085 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-29 00:58:23.238089 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.238096 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-29 00:58:23.238099 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.238103 | orchestrator | 2025-05-29 00:58:23.238107 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-29 00:58:23.238111 | orchestrator | Thursday 29 May 2025 00:56:10 +0000 (0:00:00.544) 0:10:54.929 ********** 2025-05-29 00:58:23.238118 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238122 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.238125 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.238129 | orchestrator | 2025-05-29 00:58:23.238133 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-29 00:58:23.238137 | orchestrator | Thursday 29 May 2025 00:56:10 +0000 (0:00:00.651) 0:10:55.581 ********** 2025-05-29 00:58:23.238140 | 
orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238144 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.238148 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.238152 | orchestrator | 2025-05-29 00:58:23.238155 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-29 00:58:23.238159 | orchestrator | Thursday 29 May 2025 00:56:11 +0000 (0:00:00.345) 0:10:55.927 ********** 2025-05-29 00:58:23.238163 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-29 00:58:23.238167 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238170 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-29 00:58:23.238174 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.238178 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-29 00:58:23.238182 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.238185 | orchestrator | 2025-05-29 00:58:23.238189 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-29 00:58:23.238193 | orchestrator | Thursday 29 May 2025 00:56:11 +0000 (0:00:00.490) 0:10:56.417 ********** 2025-05-29 00:58:23.238197 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.238201 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238204 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.238208 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.238245 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-29 00:58:23.238249 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.238253 | 
orchestrator | 2025-05-29 00:58:23.238257 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] ********************************* 2025-05-29 00:58:23.238261 | orchestrator | Thursday 29 May 2025 00:56:12 +0000 (0:00:00.348) 0:10:56.766 ********** 2025-05-29 00:58:23.238265 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.238268 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.238272 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.238276 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238280 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-29 00:58:23.238283 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-29 00:58:23.238287 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-29 00:58:23.238291 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.238295 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-29 00:58:23.238298 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-29 00:58:23.238305 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-29 00:58:23.238309 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.238312 | orchestrator | 2025-05-29 00:58:23.238316 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] ********************* 2025-05-29 00:58:23.238320 | orchestrator | Thursday 29 May 2025 00:56:13 +0000 (0:00:01.092) 0:10:57.859 ********** 2025-05-29 00:58:23.238324 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238327 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.238331 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.238335 | orchestrator | 2025-05-29 00:58:23.238342 | orchestrator | TASK [ceph-rgw : create rgw keyrings] 
****************************************** 2025-05-29 00:58:23.238346 | orchestrator | Thursday 29 May 2025 00:56:13 +0000 (0:00:00.526) 0:10:58.385 ********** 2025-05-29 00:58:23.238350 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-29 00:58:23.238353 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238357 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-29 00:58:23.238361 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.238364 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-29 00:58:23.238368 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.238372 | orchestrator | 2025-05-29 00:58:23.238375 | orchestrator | TASK [ceph-rgw : include_tasks multisite] ************************************** 2025-05-29 00:58:23.238379 | orchestrator | Thursday 29 May 2025 00:56:14 +0000 (0:00:00.821) 0:10:59.207 ********** 2025-05-29 00:58:23.238383 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238386 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.238390 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.238394 | orchestrator | 2025-05-29 00:58:23.238397 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] ************** 2025-05-29 00:58:23.238401 | orchestrator | Thursday 29 May 2025 00:56:15 +0000 (0:00:00.564) 0:10:59.771 ********** 2025-05-29 00:58:23.238405 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238409 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.238412 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.238416 | orchestrator | 2025-05-29 00:58:23.238420 | orchestrator | TASK [ceph-mds : include create_mds_filesystems.yml] *************************** 2025-05-29 00:58:23.238423 | orchestrator | Thursday 29 May 2025 00:56:16 +0000 (0:00:00.989) 0:11:00.760 ********** 2025-05-29 00:58:23.238427 | orchestrator | skipping: [testbed-node-4] 2025-05-29 
00:58:23.238433 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.238437 | orchestrator | included: /ansible/roles/ceph-mds/tasks/create_mds_filesystems.yml for testbed-node-3 2025-05-29 00:58:23.238441 | orchestrator | 2025-05-29 00:58:23.238444 | orchestrator | TASK [ceph-facts : get current default crush rule details] ********************* 2025-05-29 00:58:23.238448 | orchestrator | Thursday 29 May 2025 00:56:16 +0000 (0:00:00.402) 0:11:01.162 ********** 2025-05-29 00:58:23.238452 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-29 00:58:23.238456 | orchestrator | 2025-05-29 00:58:23.238459 | orchestrator | TASK [ceph-facts : get current default crush rule name] ************************ 2025-05-29 00:58:23.238463 | orchestrator | Thursday 29 May 2025 00:56:18 +0000 (0:00:01.928) 0:11:03.091 ********** 2025-05-29 00:58:23.238468 | orchestrator | skipping: [testbed-node-3] => (item={'rule_id': 0, 'rule_name': 'replicated_rule', 'type': 1, 'steps': [{'op': 'take', 'item': -1, 'item_name': 'default'}, {'op': 'chooseleaf_firstn', 'num': 0, 'type': 'host'}, {'op': 'emit'}]})  2025-05-29 00:58:23.238474 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238477 | orchestrator | 2025-05-29 00:58:23.238481 | orchestrator | TASK [ceph-mds : create filesystem pools] ************************************** 2025-05-29 00:58:23.238485 | orchestrator | Thursday 29 May 2025 00:56:18 +0000 (0:00:00.414) 0:11:03.505 ********** 2025-05-29 00:58:23.238491 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'cephfs_data', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-29 00:58:23.238499 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'application': 'cephfs', 'erasure_profile': '', 'expected_num_objects': '', 
'min_size': 0, 'name': 'cephfs_metadata', 'pg_num': 16, 'pgp_num': 16, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1}) 2025-05-29 00:58:23.238503 | orchestrator | 2025-05-29 00:58:23.238507 | orchestrator | TASK [ceph-mds : create ceph filesystem] *************************************** 2025-05-29 00:58:23.238511 | orchestrator | Thursday 29 May 2025 00:56:25 +0000 (0:00:06.928) 0:11:10.434 ********** 2025-05-29 00:58:23.238519 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] 2025-05-29 00:58:23.238522 | orchestrator | 2025-05-29 00:58:23.238526 | orchestrator | TASK [ceph-mds : include common.yml] ******************************************* 2025-05-29 00:58:23.238530 | orchestrator | Thursday 29 May 2025 00:56:28 +0000 (0:00:02.995) 0:11:13.430 ********** 2025-05-29 00:58:23.238534 | orchestrator | included: /ansible/roles/ceph-mds/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 00:58:23.238537 | orchestrator | 2025-05-29 00:58:23.238541 | orchestrator | TASK [ceph-mds : create bootstrap-mds and mds directories] ********************* 2025-05-29 00:58:23.238545 | orchestrator | Thursday 29 May 2025 00:56:29 +0000 (0:00:00.527) 0:11:13.958 ********** 2025-05-29 00:58:23.238548 | orchestrator | ok: [testbed-node-3] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-29 00:58:23.238552 | orchestrator | ok: [testbed-node-4] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-29 00:58:23.238556 | orchestrator | ok: [testbed-node-5] => (item=/var/lib/ceph/bootstrap-mds/) 2025-05-29 00:58:23.238560 | orchestrator | changed: [testbed-node-3] => (item=/var/lib/ceph/mds/ceph-testbed-node-3) 2025-05-29 00:58:23.238565 | orchestrator | changed: [testbed-node-4] => (item=/var/lib/ceph/mds/ceph-testbed-node-4) 2025-05-29 00:58:23.238569 | orchestrator | changed: [testbed-node-5] => (item=/var/lib/ceph/mds/ceph-testbed-node-5) 2025-05-29 00:58:23.238573 | orchestrator | 2025-05-29 00:58:23.238577 | orchestrator | TASK 
[ceph-mds : get keys from monitors] *************************************** 2025-05-29 00:58:23.238580 | orchestrator | Thursday 29 May 2025 00:56:30 +0000 (0:00:01.196) 0:11:15.154 ********** 2025-05-29 00:58:23.238584 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None) 2025-05-29 00:58:23.238588 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-29 00:58:23.238592 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}] 2025-05-29 00:58:23.238595 | orchestrator | 2025-05-29 00:58:23.238599 | orchestrator | TASK [ceph-mds : copy ceph key(s) if needed] *********************************** 2025-05-29 00:58:23.238603 | orchestrator | Thursday 29 May 2025 00:56:32 +0000 (0:00:01.768) 0:11:16.923 ********** 2025-05-29 00:58:23.238606 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-29 00:58:23.238610 | orchestrator | skipping: [testbed-node-3] => (item=None)  2025-05-29 00:58:23.238614 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:58:23.238618 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-29 00:58:23.238621 | orchestrator | skipping: [testbed-node-4] => (item=None)  2025-05-29 00:58:23.238625 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.238629 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-29 00:58:23.238632 | orchestrator | skipping: [testbed-node-5] => (item=None)  2025-05-29 00:58:23.238636 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:58:23.238640 | orchestrator | 2025-05-29 00:58:23.238644 | orchestrator | TASK [ceph-mds : non_containerized.yml] **************************************** 2025-05-29 00:58:23.238647 | orchestrator | Thursday 29 May 2025 00:56:33 +0000 (0:00:01.139) 0:11:18.063 ********** 2025-05-29 00:58:23.238651 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238655 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.238659 | orchestrator | skipping: 
[testbed-node-5] 2025-05-29 00:58:23.238662 | orchestrator | 2025-05-29 00:58:23.238666 | orchestrator | TASK [ceph-mds : containerized.yml] ******************************************** 2025-05-29 00:58:23.238670 | orchestrator | Thursday 29 May 2025 00:56:34 +0000 (0:00:00.594) 0:11:18.658 ********** 2025-05-29 00:58:23.238676 | orchestrator | included: /ansible/roles/ceph-mds/tasks/containerized.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 00:58:23.238680 | orchestrator | 2025-05-29 00:58:23.238684 | orchestrator | TASK [ceph-mds : include_tasks systemd.yml] ************************************ 2025-05-29 00:58:23.238688 | orchestrator | Thursday 29 May 2025 00:56:34 +0000 (0:00:00.706) 0:11:19.364 ********** 2025-05-29 00:58:23.238691 | orchestrator | included: /ansible/roles/ceph-mds/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 00:58:23.238698 | orchestrator | 2025-05-29 00:58:23.238702 | orchestrator | TASK [ceph-mds : generate systemd unit file] *********************************** 2025-05-29 00:58:23.238705 | orchestrator | Thursday 29 May 2025 00:56:35 +0000 (0:00:00.846) 0:11:20.211 ********** 2025-05-29 00:58:23.238709 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:58:23.238713 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.238716 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:58:23.238720 | orchestrator | 2025-05-29 00:58:23.238724 | orchestrator | TASK [ceph-mds : generate systemd ceph-mds target file] ************************ 2025-05-29 00:58:23.238728 | orchestrator | Thursday 29 May 2025 00:56:36 +0000 (0:00:01.196) 0:11:21.408 ********** 2025-05-29 00:58:23.238731 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:58:23.238735 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.238739 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:58:23.238743 | orchestrator | 2025-05-29 00:58:23.238746 | orchestrator | TASK [ceph-mds : enable 
ceph-mds.target] *************************************** 2025-05-29 00:58:23.238750 | orchestrator | Thursday 29 May 2025 00:56:37 +0000 (0:00:01.185) 0:11:22.594 ********** 2025-05-29 00:58:23.238754 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:58:23.238758 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.238761 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:58:23.238765 | orchestrator | 2025-05-29 00:58:23.238769 | orchestrator | TASK [ceph-mds : systemd start mds container] ********************************** 2025-05-29 00:58:23.238773 | orchestrator | Thursday 29 May 2025 00:56:39 +0000 (0:00:01.867) 0:11:24.461 ********** 2025-05-29 00:58:23.238776 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:58:23.238780 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.238784 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:58:23.238787 | orchestrator | 2025-05-29 00:58:23.238791 | orchestrator | TASK [ceph-mds : wait for mds socket to exist] ********************************* 2025-05-29 00:58:23.238795 | orchestrator | Thursday 29 May 2025 00:56:41 +0000 (0:00:01.862) 0:11:26.323 ********** 2025-05-29 00:58:23.238799 | orchestrator | FAILED - RETRYING: [testbed-node-3]: wait for mds socket to exist (5 retries left). 2025-05-29 00:58:23.238802 | orchestrator | FAILED - RETRYING: [testbed-node-4]: wait for mds socket to exist (5 retries left). 2025-05-29 00:58:23.238806 | orchestrator | FAILED - RETRYING: [testbed-node-5]: wait for mds socket to exist (5 retries left). 
2025-05-29 00:58:23.238810 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.238814 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.238817 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.238821 | orchestrator | 2025-05-29 00:58:23.238825 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] ********************** 2025-05-29 00:58:23.238829 | orchestrator | Thursday 29 May 2025 00:56:58 +0000 (0:00:17.013) 0:11:43.337 ********** 2025-05-29 00:58:23.238832 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:58:23.238836 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.238840 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:58:23.238843 | orchestrator | 2025-05-29 00:58:23.238847 | orchestrator | RUNNING HANDLER [ceph-handler : mdss handler] ********************************** 2025-05-29 00:58:23.238851 | orchestrator | Thursday 29 May 2025 00:56:59 +0000 (0:00:00.689) 0:11:44.026 ********** 2025-05-29 00:58:23.238855 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_mdss.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 00:58:23.238859 | orchestrator | 2025-05-29 00:58:23.238864 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called before restart] ******** 2025-05-29 00:58:23.238868 | orchestrator | Thursday 29 May 2025 00:57:00 +0000 (0:00:00.744) 0:11:44.770 ********** 2025-05-29 00:58:23.238872 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.238875 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.238879 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.238883 | orchestrator | 2025-05-29 00:58:23.238887 | orchestrator | RUNNING HANDLER [ceph-handler : copy mds restart script] *********************** 2025-05-29 00:58:23.238890 | orchestrator | Thursday 29 May 2025 00:57:00 +0000 (0:00:00.345) 0:11:45.116 ********** 2025-05-29 00:58:23.238898 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:58:23.238902 | 
orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.238906 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:58:23.238910 | orchestrator | 2025-05-29 00:58:23.238913 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph mds daemon(s)] ******************** 2025-05-29 00:58:23.238917 | orchestrator | Thursday 29 May 2025 00:57:01 +0000 (0:00:01.141) 0:11:46.258 ********** 2025-05-29 00:58:23.238921 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.238925 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.238928 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.238932 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.238936 | orchestrator | 2025-05-29 00:58:23.238940 | orchestrator | RUNNING HANDLER [ceph-handler : set _mds_handler_called after restart] ********* 2025-05-29 00:58:23.238943 | orchestrator | Thursday 29 May 2025 00:57:02 +0000 (0:00:01.307) 0:11:47.565 ********** 2025-05-29 00:58:23.238947 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.238951 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.238955 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.238958 | orchestrator | 2025-05-29 00:58:23.238962 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-29 00:58:23.238966 | orchestrator | Thursday 29 May 2025 00:57:03 +0000 (0:00:00.313) 0:11:47.879 ********** 2025-05-29 00:58:23.238969 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:58:23.238973 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.238977 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:58:23.238981 | orchestrator | 2025-05-29 00:58:23.238984 | orchestrator | PLAY [Apply role ceph-rgw] ***************************************************** 2025-05-29 00:58:23.238988 | orchestrator | 2025-05-29 00:58:23.238994 | orchestrator | TASK 
[ceph-handler : include check_running_containers.yml] ********************* 2025-05-29 00:58:23.238998 | orchestrator | Thursday 29 May 2025 00:57:05 +0000 (0:00:02.187) 0:11:50.066 ********** 2025-05-29 00:58:23.239001 | orchestrator | included: /ansible/roles/ceph-handler/tasks/check_running_containers.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 00:58:23.239005 | orchestrator | 2025-05-29 00:58:23.239009 | orchestrator | TASK [ceph-handler : check for a mon container] ******************************** 2025-05-29 00:58:23.239013 | orchestrator | Thursday 29 May 2025 00:57:06 +0000 (0:00:00.762) 0:11:50.829 ********** 2025-05-29 00:58:23.239016 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.239020 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.239024 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.239028 | orchestrator | 2025-05-29 00:58:23.239031 | orchestrator | TASK [ceph-handler : check for an osd container] ******************************* 2025-05-29 00:58:23.239035 | orchestrator | Thursday 29 May 2025 00:57:06 +0000 (0:00:00.351) 0:11:51.181 ********** 2025-05-29 00:58:23.239039 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.239043 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.239046 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.239050 | orchestrator | 2025-05-29 00:58:23.239054 | orchestrator | TASK [ceph-handler : check for a mds container] ******************************** 2025-05-29 00:58:23.239057 | orchestrator | Thursday 29 May 2025 00:57:07 +0000 (0:00:00.727) 0:11:51.908 ********** 2025-05-29 00:58:23.239061 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.239065 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.239069 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.239072 | orchestrator | 2025-05-29 00:58:23.239076 | orchestrator | TASK [ceph-handler : check for a rgw container] ******************************** 
2025-05-29 00:58:23.239080 | orchestrator | Thursday 29 May 2025 00:57:08 +0000 (0:00:00.852) 0:11:52.761 ********** 2025-05-29 00:58:23.239084 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.239087 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.239091 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.239095 | orchestrator | 2025-05-29 00:58:23.239102 | orchestrator | TASK [ceph-handler : check for a mgr container] ******************************** 2025-05-29 00:58:23.239106 | orchestrator | Thursday 29 May 2025 00:57:08 +0000 (0:00:00.739) 0:11:53.501 ********** 2025-05-29 00:58:23.239109 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.239113 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.239117 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.239121 | orchestrator | 2025-05-29 00:58:23.239124 | orchestrator | TASK [ceph-handler : check for a rbd mirror container] ************************* 2025-05-29 00:58:23.239128 | orchestrator | Thursday 29 May 2025 00:57:09 +0000 (0:00:00.320) 0:11:53.821 ********** 2025-05-29 00:58:23.239132 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.239135 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.239139 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.239143 | orchestrator | 2025-05-29 00:58:23.239147 | orchestrator | TASK [ceph-handler : check for a nfs container] ******************************** 2025-05-29 00:58:23.239150 | orchestrator | Thursday 29 May 2025 00:57:09 +0000 (0:00:00.332) 0:11:54.154 ********** 2025-05-29 00:58:23.239154 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.239158 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.239161 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.239165 | orchestrator | 2025-05-29 00:58:23.239169 | orchestrator | TASK [ceph-handler : check for a tcmu-runner container] ************************ 2025-05-29 
00:58:23.239173 | orchestrator | Thursday 29 May 2025 00:57:10 +0000 (0:00:00.732) 0:11:54.886 ********** 2025-05-29 00:58:23.239176 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.239180 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.239184 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.239188 | orchestrator | 2025-05-29 00:58:23.239191 | orchestrator | TASK [ceph-handler : check for a rbd-target-api container] ********************* 2025-05-29 00:58:23.239197 | orchestrator | Thursday 29 May 2025 00:57:10 +0000 (0:00:00.357) 0:11:55.244 ********** 2025-05-29 00:58:23.239201 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.239205 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.239208 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.239226 | orchestrator | 2025-05-29 00:58:23.239230 | orchestrator | TASK [ceph-handler : check for a rbd-target-gw container] ********************** 2025-05-29 00:58:23.239234 | orchestrator | Thursday 29 May 2025 00:57:10 +0000 (0:00:00.324) 0:11:55.569 ********** 2025-05-29 00:58:23.239238 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.239242 | orchestrator | skipping: [testbed-node-4] 2025-05-29 00:58:23.239245 | orchestrator | skipping: [testbed-node-5] 2025-05-29 00:58:23.239249 | orchestrator | 2025-05-29 00:58:23.239253 | orchestrator | TASK [ceph-handler : check for a ceph-crash container] ************************* 2025-05-29 00:58:23.239257 | orchestrator | Thursday 29 May 2025 00:57:11 +0000 (0:00:00.310) 0:11:55.879 ********** 2025-05-29 00:58:23.239261 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.239264 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.239268 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.239272 | orchestrator | 2025-05-29 00:58:23.239276 | orchestrator | TASK [ceph-handler : include check_socket_non_container.yml] ******************* 2025-05-29 00:58:23.239280 | 
orchestrator | Thursday 29 May 2025 00:57:12 +0000 (0:00:00.990) 0:11:56.870 **********
2025-05-29 00:58:23.239283 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239287 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239291 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239295 | orchestrator |
2025-05-29 00:58:23.239298 | orchestrator | TASK [ceph-handler : set_fact handler_mon_status] ******************************
2025-05-29 00:58:23.239302 | orchestrator | Thursday 29 May 2025 00:57:12 +0000 (0:00:00.318) 0:11:57.189 **********
2025-05-29 00:58:23.239306 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239310 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239313 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239317 | orchestrator |
2025-05-29 00:58:23.239321 | orchestrator | TASK [ceph-handler : set_fact handler_osd_status] ******************************
2025-05-29 00:58:23.239328 | orchestrator | Thursday 29 May 2025 00:57:12 +0000 (0:00:00.347) 0:11:57.537 **********
2025-05-29 00:58:23.239332 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.239335 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.239339 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.239343 | orchestrator |
2025-05-29 00:58:23.239349 | orchestrator | TASK [ceph-handler : set_fact handler_mds_status] ******************************
2025-05-29 00:58:23.239353 | orchestrator | Thursday 29 May 2025 00:57:13 +0000 (0:00:00.345) 0:11:57.883 **********
2025-05-29 00:58:23.239357 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.239360 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.239364 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.239368 | orchestrator |
2025-05-29 00:58:23.239371 | orchestrator | TASK [ceph-handler : set_fact handler_rgw_status] ******************************
2025-05-29 00:58:23.239375 | orchestrator | Thursday 29 May 2025 00:57:13 +0000 (0:00:00.620) 0:11:58.503 **********
2025-05-29 00:58:23.239379 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.239383 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.239386 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.239390 | orchestrator |
2025-05-29 00:58:23.239394 | orchestrator | TASK [ceph-handler : set_fact handler_nfs_status] ******************************
2025-05-29 00:58:23.239398 | orchestrator | Thursday 29 May 2025 00:57:14 +0000 (0:00:00.369) 0:11:58.872 **********
2025-05-29 00:58:23.239401 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239405 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239409 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239413 | orchestrator |
2025-05-29 00:58:23.239416 | orchestrator | TASK [ceph-handler : set_fact handler_rbd_status] ******************************
2025-05-29 00:58:23.239420 | orchestrator | Thursday 29 May 2025 00:57:14 +0000 (0:00:00.329) 0:11:59.202 **********
2025-05-29 00:58:23.239424 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239427 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239431 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239435 | orchestrator |
2025-05-29 00:58:23.239439 | orchestrator | TASK [ceph-handler : set_fact handler_mgr_status] ******************************
2025-05-29 00:58:23.239442 | orchestrator | Thursday 29 May 2025 00:57:14 +0000 (0:00:00.320) 0:11:59.522 **********
2025-05-29 00:58:23.239446 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239450 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239453 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239457 | orchestrator |
2025-05-29 00:58:23.239461 | orchestrator | TASK [ceph-handler : set_fact handler_crash_status] ****************************
2025-05-29 00:58:23.239465 | orchestrator | Thursday 29 May 2025 00:57:15 +0000 (0:00:00.617) 0:12:00.139 **********
2025-05-29 00:58:23.239468 | orchestrator | ok: [testbed-node-3]
2025-05-29 00:58:23.239472 | orchestrator | ok: [testbed-node-4]
2025-05-29 00:58:23.239476 | orchestrator | ok: [testbed-node-5]
2025-05-29 00:58:23.239480 | orchestrator |
2025-05-29 00:58:23.239483 | orchestrator | TASK [ceph-config : include create_ceph_initial_dirs.yml] **********************
2025-05-29 00:58:23.239487 | orchestrator | Thursday 29 May 2025 00:57:15 +0000 (0:00:00.365) 0:12:00.504 **********
2025-05-29 00:58:23.239491 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239494 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239498 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239502 | orchestrator |
2025-05-29 00:58:23.239505 | orchestrator | TASK [ceph-config : include_tasks rgw_systemd_environment_file.yml] ************
2025-05-29 00:58:23.239509 | orchestrator | Thursday 29 May 2025 00:57:16 +0000 (0:00:00.343) 0:12:00.848 **********
2025-05-29 00:58:23.239513 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239517 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239521 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239524 | orchestrator |
2025-05-29 00:58:23.239528 | orchestrator | TASK [ceph-config : reset num_osds] ********************************************
2025-05-29 00:58:23.239532 | orchestrator | Thursday 29 May 2025 00:57:16 +0000 (0:00:00.355) 0:12:01.203 **********
2025-05-29 00:58:23.239538 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239542 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239546 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239550 | orchestrator |
2025-05-29 00:58:23.239553 | orchestrator | TASK [ceph-config : count number of osds for lvm scenario] *********************
2025-05-29 00:58:23.239559 | orchestrator | Thursday 29 May 2025 00:57:17 +0000 (0:00:00.620) 0:12:01.823 **********
2025-05-29 00:58:23.239563 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239567 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239570 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239574 | orchestrator |
2025-05-29 00:58:23.239578 | orchestrator | TASK [ceph-config : look up for ceph-volume rejected devices] ******************
2025-05-29 00:58:23.239582 | orchestrator | Thursday 29 May 2025 00:57:17 +0000 (0:00:00.355) 0:12:02.179 **********
2025-05-29 00:58:23.239585 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239589 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239593 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239597 | orchestrator |
2025-05-29 00:58:23.239600 | orchestrator | TASK [ceph-config : set_fact rejected_devices] *********************************
2025-05-29 00:58:23.239604 | orchestrator | Thursday 29 May 2025 00:57:17 +0000 (0:00:00.337) 0:12:02.516 **********
2025-05-29 00:58:23.239608 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239612 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239615 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239619 | orchestrator |
2025-05-29 00:58:23.239623 | orchestrator | TASK [ceph-config : set_fact _devices] *****************************************
2025-05-29 00:58:23.239627 | orchestrator | Thursday 29 May 2025 00:57:18 +0000 (0:00:00.322) 0:12:02.839 **********
2025-05-29 00:58:23.239630 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239634 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239638 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239641 | orchestrator |
2025-05-29 00:58:23.239645 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm batch --report' to see how many osds are to be created] ***
2025-05-29 00:58:23.239649 | orchestrator | Thursday 29 May 2025 00:57:18 +0000 (0:00:00.623) 0:12:03.462 **********
2025-05-29 00:58:23.239653 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239656 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239660 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239664 | orchestrator |
2025-05-29 00:58:23.239667 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (legacy report)] ***
2025-05-29 00:58:23.239671 | orchestrator | Thursday 29 May 2025 00:57:19 +0000 (0:00:00.362) 0:12:03.825 **********
2025-05-29 00:58:23.239675 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239679 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239685 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239689 | orchestrator |
2025-05-29 00:58:23.239692 | orchestrator | TASK [ceph-config : set_fact num_osds from the output of 'ceph-volume lvm batch --report' (new report)] ***
2025-05-29 00:58:23.239696 | orchestrator | Thursday 29 May 2025 00:57:19 +0000 (0:00:00.329) 0:12:04.155 **********
2025-05-29 00:58:23.239700 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239704 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239707 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239711 | orchestrator |
2025-05-29 00:58:23.239715 | orchestrator | TASK [ceph-config : run 'ceph-volume lvm list' to see how many osds have already been created] ***
2025-05-29 00:58:23.239719 | orchestrator | Thursday 29 May 2025 00:57:19 +0000 (0:00:00.347) 0:12:04.503 **********
2025-05-29 00:58:23.239722 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239726 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239730 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239733 | orchestrator |
2025-05-29 00:58:23.239737 | orchestrator | TASK [ceph-config : set_fact num_osds (add existing osds)] *********************
2025-05-29 00:58:23.239744 | orchestrator | Thursday 29 May 2025 00:57:20 +0000 (0:00:00.668) 0:12:05.171 **********
2025-05-29 00:58:23.239748 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239751 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239755 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239759 | orchestrator |
2025-05-29 00:58:23.239763 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target, override from ceph_conf_overrides] ***
2025-05-29 00:58:23.239766 | orchestrator | Thursday 29 May 2025 00:57:20 +0000 (0:00:00.345) 0:12:05.516 **********
2025-05-29 00:58:23.239770 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-29 00:58:23.239774 | orchestrator | skipping: [testbed-node-3] => (item=)
2025-05-29 00:58:23.239778 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239781 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-29 00:58:23.239785 | orchestrator | skipping: [testbed-node-4] => (item=)
2025-05-29 00:58:23.239789 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239793 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-29 00:58:23.239796 | orchestrator | skipping: [testbed-node-5] => (item=)
2025-05-29 00:58:23.239800 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239804 | orchestrator |
2025-05-29 00:58:23.239808 | orchestrator | TASK [ceph-config : drop osd_memory_target from conf override] *****************
2025-05-29 00:58:23.239811 | orchestrator | Thursday 29 May 2025 00:57:21 +0000 (0:00:00.389) 0:12:05.905 **********
2025-05-29 00:58:23.239815 | orchestrator | skipping: [testbed-node-3] => (item=osd memory target)
2025-05-29 00:58:23.239819 | orchestrator | skipping: [testbed-node-3] => (item=osd_memory_target)
2025-05-29 00:58:23.239823 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239826 | orchestrator | skipping: [testbed-node-4] => (item=osd memory target)
2025-05-29 00:58:23.239830 | orchestrator | skipping: [testbed-node-4] => (item=osd_memory_target)
2025-05-29 00:58:23.239834 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239838 | orchestrator | skipping: [testbed-node-5] => (item=osd memory target)
2025-05-29 00:58:23.239841 | orchestrator | skipping: [testbed-node-5] => (item=osd_memory_target)
2025-05-29 00:58:23.239845 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239849 | orchestrator |
2025-05-29 00:58:23.239853 | orchestrator | TASK [ceph-config : set_fact _osd_memory_target] *******************************
2025-05-29 00:58:23.239857 | orchestrator | Thursday 29 May 2025 00:57:21 +0000 (0:00:00.438) 0:12:06.344 **********
2025-05-29 00:58:23.239860 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239864 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239868 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239871 | orchestrator |
2025-05-29 00:58:23.239875 | orchestrator | TASK [ceph-config : create ceph conf directory] ********************************
2025-05-29 00:58:23.239881 | orchestrator | Thursday 29 May 2025 00:57:22 +0000 (0:00:00.698) 0:12:07.042 **********
2025-05-29 00:58:23.239884 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239888 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239892 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239896 | orchestrator |
2025-05-29 00:58:23.239899 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] ***
2025-05-29 00:58:23.239903 | orchestrator | Thursday 29 May 2025 00:57:22 +0000 (0:00:00.365) 0:12:07.407 **********
2025-05-29 00:58:23.239907 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239911 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239914 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239918 | orchestrator |
2025-05-29 00:58:23.239922 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] ****
2025-05-29 00:58:23.239926 | orchestrator | Thursday 29 May 2025 00:57:23 +0000 (0:00:00.357) 0:12:07.766 **********
2025-05-29 00:58:23.239929 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239933 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239937 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239943 | orchestrator |
2025-05-29 00:58:23.239947 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] ****
2025-05-29 00:58:23.239951 | orchestrator | Thursday 29 May 2025 00:57:23 +0000 (0:00:00.341) 0:12:08.107 **********
2025-05-29 00:58:23.239955 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239959 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239962 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239966 | orchestrator |
2025-05-29 00:58:23.239970 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] ***************
2025-05-29 00:58:23.239974 | orchestrator | Thursday 29 May 2025 00:57:24 +0000 (0:00:00.631) 0:12:08.739 **********
2025-05-29 00:58:23.239977 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.239981 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.239985 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.239988 | orchestrator |
2025-05-29 00:58:23.239992 | orchestrator | TASK [ceph-facts : set_fact _interface] ****************************************
2025-05-29 00:58:23.239996 | orchestrator | Thursday 29 May 2025 00:57:24 +0000 (0:00:00.332) 0:12:09.071 **********
2025-05-29 00:58:23.240002 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-29 00:58:23.240006 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-29 00:58:23.240010 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-29 00:58:23.240013 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240017 | orchestrator |
2025-05-29 00:58:23.240021 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ******
2025-05-29 00:58:23.240024 | orchestrator | Thursday 29 May 2025 00:57:24 +0000 (0:00:00.462) 0:12:09.534 **********
2025-05-29 00:58:23.240028 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-29 00:58:23.240032 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-29 00:58:23.240036 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-29 00:58:23.240039 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240043 | orchestrator |
2025-05-29 00:58:23.240047 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ******
2025-05-29 00:58:23.240051 | orchestrator | Thursday 29 May 2025 00:57:25 +0000 (0:00:00.492) 0:12:10.027 **********
2025-05-29 00:58:23.240055 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-29 00:58:23.240058 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-29 00:58:23.240062 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-29 00:58:23.240066 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240069 | orchestrator |
2025-05-29 00:58:23.240073 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-29 00:58:23.240077 | orchestrator | Thursday 29 May 2025 00:57:25 +0000 (0:00:00.463) 0:12:10.490 **********
2025-05-29 00:58:23.240081 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240085 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240088 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240092 | orchestrator |
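The ceph-facts tasks above pick each RGW host's bind address, trying `radosgw_address_block` (IPv4/IPv6), then an explicit `radosgw_address`, then `radosgw_interface`. As a minimal sketch only (the role itself uses the `ansible.utils` `ipaddr` filter on the host's fact list, not this helper), the address-block case amounts to selecting the first host address inside the configured CIDR:

```python
import ipaddress

def pick_radosgw_address(node_addresses, address_block):
    """Return the first address that falls inside the given CIDR block.

    Loose analogue of the 'set_fact _radosgw_address to radosgw_address_block'
    step; node_addresses stands in for a host's IP fact list.
    """
    net = ipaddress.ip_network(address_block)
    for addr in node_addresses:
        if ipaddress.ip_address(addr) in net:
            return addr
    return None

# 192.168.16.13 appears in the log above; the /20 block is a hypothetical example.
print(pick_radosgw_address(["10.0.2.15", "192.168.16.13"], "192.168.16.0/20"))
```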
2025-05-29 00:58:23.240096 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] ***************
2025-05-29 00:58:23.240100 | orchestrator | Thursday 29 May 2025 00:57:26 +0000 (0:00:00.341) 0:12:10.832 **********
2025-05-29 00:58:23.240103 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-29 00:58:23.240107 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240111 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-29 00:58:23.240115 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240118 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-29 00:58:23.240122 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240126 | orchestrator |
2025-05-29 00:58:23.240129 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] **************************
2025-05-29 00:58:23.240133 | orchestrator | Thursday 29 May 2025 00:57:26 +0000 (0:00:00.801) 0:12:11.634 **********
2025-05-29 00:58:23.240140 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240144 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240147 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240151 | orchestrator |
2025-05-29 00:58:23.240155 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] ***************************
2025-05-29 00:58:23.240159 | orchestrator | Thursday 29 May 2025 00:57:27 +0000 (0:00:00.350) 0:12:11.984 **********
2025-05-29 00:58:23.240162 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240166 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240170 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240174 | orchestrator |
2025-05-29 00:58:23.240177 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ******************
2025-05-29 00:58:23.240181 | orchestrator | Thursday 29 May 2025 00:57:27 +0000 (0:00:00.333) 0:12:12.318 **********
2025-05-29 00:58:23.240185 | orchestrator | skipping: [testbed-node-3] => (item=0)
2025-05-29 00:58:23.240189 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240192 | orchestrator | skipping: [testbed-node-4] => (item=0)
2025-05-29 00:58:23.240198 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240202 | orchestrator | skipping: [testbed-node-5] => (item=0)
2025-05-29 00:58:23.240205 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240209 | orchestrator |
2025-05-29 00:58:23.240226 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ********************************
2025-05-29 00:58:23.240230 | orchestrator | Thursday 29 May 2025 00:57:28 +0000 (0:00:00.482) 0:12:12.801 **********
2025-05-29 00:58:23.240233 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-29 00:58:23.240237 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240241 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-29 00:58:23.240245 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240249 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-29 00:58:23.240252 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240256 | orchestrator |
2025-05-29 00:58:23.240260 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_all] *********************************
2025-05-29 00:58:23.240264 | orchestrator | Thursday 29 May 2025 00:57:28 +0000 (0:00:00.702) 0:12:13.504 **********
2025-05-29 00:58:23.240267 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)
2025-05-29 00:58:23.240271 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)
2025-05-29 00:58:23.240275 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)
2025-05-29 00:58:23.240278 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)
2025-05-29 00:58:23.240282 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)
2025-05-29 00:58:23.240286 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)
2025-05-29 00:58:23.240290 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240293 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240297 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)
2025-05-29 00:58:23.240303 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)
2025-05-29 00:58:23.240307 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)
2025-05-29 00:58:23.240310 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240314 | orchestrator |
2025-05-29 00:58:23.240318 | orchestrator | TASK [ceph-config : generate ceph.conf configuration file] *********************
2025-05-29 00:58:23.240322 | orchestrator | Thursday 29 May 2025 00:57:29 +0000 (0:00:00.620) 0:12:14.124 **********
2025-05-29 00:58:23.240325 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240329 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240333 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240336 | orchestrator |
2025-05-29 00:58:23.240344 | orchestrator | TASK [ceph-rgw : create rgw keyrings] ******************************************
2025-05-29 00:58:23.240348 | orchestrator | Thursday 29 May 2025 00:57:30 +0000 (0:00:00.840) 0:12:14.965 **********
2025-05-29 00:58:23.240352 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-29 00:58:23.240355 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240359 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-29 00:58:23.240363 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240367 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-29 00:58:23.240370 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240374 | orchestrator |
2025-05-29 00:58:23.240378 | orchestrator | TASK [ceph-rgw : include_tasks multisite] **************************************
2025-05-29 00:58:23.240382 | orchestrator | Thursday 29 May 2025 00:57:30 +0000 (0:00:00.571) 0:12:15.537 **********
2025-05-29 00:58:23.240385 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240389 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240393 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240396 | orchestrator |
2025-05-29 00:58:23.240400 | orchestrator | TASK [ceph-handler : set_fact multisite_called_from_handler_role] **************
2025-05-29 00:58:23.240404 | orchestrator | Thursday 29 May 2025 00:57:31 +0000 (0:00:00.798) 0:12:16.336 **********
2025-05-29 00:58:23.240408 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240411 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240415 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240419 | orchestrator |
2025-05-29 00:58:23.240422 | orchestrator | TASK [ceph-rgw : include common.yml] *******************************************
2025-05-29 00:58:23.240426 | orchestrator | Thursday 29 May 2025 00:57:32 +0000 (0:00:00.549) 0:12:16.885 **********
2025-05-29 00:58:23.240430 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/common.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.240434 | orchestrator |
2025-05-29 00:58:23.240437 | orchestrator | TASK [ceph-rgw : create rados gateway directories] *****************************
2025-05-29 00:58:23.240441 | orchestrator | Thursday 29 May 2025 00:57:33 +0000 (0:00:00.821) 0:12:17.707 **********
2025-05-29 00:58:23.240445 | orchestrator | ok: [testbed-node-3] => (item=/var/run/ceph)
2025-05-29 00:58:23.240448 | orchestrator | ok: [testbed-node-4] => (item=/var/run/ceph)
2025-05-29 00:58:23.240452 | orchestrator | ok: [testbed-node-5] => (item=/var/run/ceph)
2025-05-29 00:58:23.240456 | orchestrator |
2025-05-29 00:58:23.240460 | orchestrator | TASK [ceph-rgw : get keys from monitors] ***************************************
2025-05-29 00:58:23.240463 | orchestrator | Thursday 29 May 2025 00:57:33 +0000 (0:00:00.675) 0:12:18.382 **********
2025-05-29 00:58:23.240467 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 00:58:23.240471 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-29 00:58:23.240474 | orchestrator | ok: [testbed-node-3 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-29 00:58:23.240478 | orchestrator |
2025-05-29 00:58:23.240482 | orchestrator | TASK [ceph-rgw : copy ceph key(s) if needed] ***********************************
2025-05-29 00:58:23.240486 | orchestrator | Thursday 29 May 2025 00:57:35 +0000 (0:00:01.874) 0:12:20.257 **********
2025-05-29 00:58:23.240491 | orchestrator | changed: [testbed-node-3] => (item=None)
2025-05-29 00:58:23.240495 | orchestrator | skipping: [testbed-node-3] => (item=None)
2025-05-29 00:58:23.240499 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.240503 | orchestrator | changed: [testbed-node-4] => (item=None)
2025-05-29 00:58:23.240507 | orchestrator | skipping: [testbed-node-4] => (item=None)
2025-05-29 00:58:23.240510 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.240514 | orchestrator | changed: [testbed-node-5] => (item=None)
2025-05-29 00:58:23.240518 | orchestrator | skipping: [testbed-node-5] => (item=None)
2025-05-29 00:58:23.240521 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.240525 | orchestrator |
2025-05-29 00:58:23.240529 | orchestrator | TASK [ceph-rgw : copy SSL certificate & key data to certificate path] **********
2025-05-29 00:58:23.240537 | orchestrator | Thursday 29 May 2025 00:57:36 +0000 (0:00:01.207) 0:12:21.464 **********
2025-05-29 00:58:23.240543 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240549 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240555 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240561 | orchestrator |
2025-05-29 00:58:23.240567 | orchestrator | TASK [ceph-rgw : include_tasks pre_requisite.yml] ******************************
2025-05-29 00:58:23.240573 | orchestrator | Thursday 29 May 2025 00:57:37 +0000 (0:00:00.602) 0:12:22.067 **********
2025-05-29 00:58:23.240579 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240584 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240591 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240597 | orchestrator |
2025-05-29 00:58:23.240604 | orchestrator | TASK [ceph-rgw : rgw pool creation tasks] **************************************
2025-05-29 00:58:23.240610 | orchestrator | Thursday 29 May 2025 00:57:37 +0000 (0:00:00.326) 0:12:22.394 **********
2025-05-29 00:58:23.240617 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/rgw_create_pools.yml for testbed-node-3
2025-05-29 00:58:23.240621 | orchestrator |
2025-05-29 00:58:23.240625 | orchestrator | TASK [ceph-rgw : create ec profile] ********************************************
2025-05-29 00:58:23.240629 | orchestrator | Thursday 29 May 2025 00:57:38 +0000 (0:00:00.246) 0:12:22.640 **********
2025-05-29 00:58:23.240633 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240640 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240644 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240651 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240655 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240659 | orchestrator |
2025-05-29 00:58:23.240662 | orchestrator | TASK [ceph-rgw : set crush rule] ***********************************************
2025-05-29 00:58:23.240666 | orchestrator | Thursday 29 May 2025 00:57:38 +0000 (0:00:00.880) 0:12:23.521 **********
2025-05-29 00:58:23.240670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240674 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240678 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240681 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240685 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240689 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240692 | orchestrator |
2025-05-29 00:58:23.240696 | orchestrator | TASK [ceph-rgw : create ec pools for rgw] **************************************
2025-05-29 00:58:23.240700 | orchestrator | Thursday 29 May 2025 00:57:39 +0000 (0:00:00.904) 0:12:24.425 **********
2025-05-29 00:58:23.240704 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240718 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240722 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240726 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240730 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240733 | orchestrator |
2025-05-29 00:58:23.240737 | orchestrator | TASK [ceph-rgw : create replicated pools for rgw] ******************************
2025-05-29 00:58:23.240741 | orchestrator | Thursday 29 May 2025 00:57:40 +0000 (0:00:00.646) 0:12:25.072 **********
2025-05-29 00:58:23.240744 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.data', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240748 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.buckets.index', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240751 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.control', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240755 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.log', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240759 | orchestrator | changed: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item={'key': 'default.rgw.meta', 'value': {'pg_num': 8, 'size': 3, 'type': 'replicated'}})
2025-05-29 00:58:23.240763 | orchestrator |
2025-05-29 00:58:23.240767 | orchestrator | TASK [ceph-rgw : include_tasks openstack-keystone.yml] *************************
2025-05-29 00:58:23.240771 | orchestrator | Thursday 29 May 2025 00:58:06 +0000 (0:00:26.047) 0:12:51.119 **********
2025-05-29 00:58:23.240774 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240778 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240782 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240786 | orchestrator |
2025-05-29 00:58:23.240789 | orchestrator | TASK [ceph-rgw : include_tasks start_radosgw.yml] ******************************
2025-05-29 00:58:23.240793 | orchestrator | Thursday 29 May 2025 00:58:06 +0000 (0:00:00.490) 0:12:51.609 **********
2025-05-29 00:58:23.240797 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240800 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240804 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240808 | orchestrator |
2025-05-29 00:58:23.240811 | orchestrator | TASK [ceph-rgw : include start_docker_rgw.yml] *********************************
2025-05-29 00:58:23.240815 | orchestrator | Thursday 29 May 2025 00:58:07 +0000 (0:00:00.354) 0:12:51.964 **********
2025-05-29 00:58:23.240821 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/start_docker_rgw.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.240825 | orchestrator |
2025-05-29 00:58:23.240829 | orchestrator | TASK [ceph-rgw : include_task systemd.yml] *************************************
2025-05-29 00:58:23.240833 | orchestrator | Thursday 29 May 2025 00:58:07 +0000 (0:00:00.553) 0:12:52.518 **********
2025-05-29 00:58:23.240836 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/systemd.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.240840 | orchestrator |
2025-05-29 00:58:23.240844 | orchestrator | TASK [ceph-rgw : generate systemd unit file] ***********************************
2025-05-29 00:58:23.240847 | orchestrator | Thursday 29 May 2025 00:58:08 +0000 (0:00:00.803) 0:12:53.321 **********
2025-05-29 00:58:23.240851 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.240855 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.240858 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.240862 | orchestrator |
2025-05-29 00:58:23.240866 | orchestrator | TASK [ceph-rgw : generate systemd ceph-radosgw target file] ********************
2025-05-29 00:58:23.240870 | orchestrator | Thursday 29 May 2025 00:58:09 +0000 (0:00:01.204) 0:12:54.525 **********
2025-05-29 00:58:23.240876 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.240880 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.240883 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.240887 | orchestrator |
2025-05-29 00:58:23.240891 | orchestrator | TASK [ceph-rgw : enable ceph-radosgw.target] ***********************************
2025-05-29 00:58:23.240895 | orchestrator | Thursday 29 May 2025 00:58:11 +0000 (0:00:01.167) 0:12:55.693 **********
2025-05-29 00:58:23.240898 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.240902 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.240906 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.240909 | orchestrator |
2025-05-29 00:58:23.240913 | orchestrator | TASK [ceph-rgw : systemd start rgw container] **********************************
2025-05-29 00:58:23.240917 | orchestrator | Thursday 29 May 2025 00:58:13 +0000 (0:00:01.950) 0:12:57.644 **********
2025-05-29 00:58:23.240920 | orchestrator | changed: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})
2025-05-29 00:58:23.240924 | orchestrator | changed: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})
2025-05-29 00:58:23.240928 | orchestrator | changed: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})
2025-05-29 00:58:23.240932 | orchestrator |
2025-05-29 00:58:23.240936 | orchestrator | TASK [ceph-rgw : include_tasks multisite/main.yml] *****************************
2025-05-29 00:58:23.240939 | orchestrator | Thursday 29 May 2025 00:58:14 +0000 (0:00:01.967) 0:12:59.612 **********
2025-05-29 00:58:23.240943 | orchestrator | skipping: [testbed-node-3]
2025-05-29 00:58:23.240947 | orchestrator | skipping: [testbed-node-4]
2025-05-29 00:58:23.240950 | orchestrator | skipping: [testbed-node-5]
2025-05-29 00:58:23.240954 | orchestrator |
2025-05-29 00:58:23.240958 | orchestrator | RUNNING HANDLER [ceph-handler : make tempdir for scripts] **********************
2025-05-29 00:58:23.240962 | orchestrator | Thursday 29 May 2025 00:58:16 +0000 (0:00:01.222) 0:13:00.834 **********
2025-05-29 00:58:23.240965 | orchestrator | changed: [testbed-node-3]
2025-05-29 00:58:23.240969 | orchestrator | changed: [testbed-node-4]
2025-05-29 00:58:23.240973 | orchestrator | changed: [testbed-node-5]
2025-05-29 00:58:23.240976 | orchestrator |
2025-05-29 00:58:23.240980 | orchestrator | RUNNING HANDLER [ceph-handler : rgws handler] **********************************
2025-05-29 00:58:23.240984 | orchestrator | Thursday 29 May 2025 00:58:16 +0000 (0:00:00.715) 0:13:01.550 **********
2025-05-29 00:58:23.240989 | orchestrator | included: /ansible/roles/ceph-handler/tasks/handler_rgws.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 00:58:23.240993 | orchestrator |
2025-05-29 00:58:23.240997 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called before restart] ********
2025-05-29 00:58:23.241001 | orchestrator | Thursday 29 May 2025 00:58:17 +0000 (0:00:00.811) 0:13:02.362 **********
2025-05-29 00:58:23.241004
| orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.241008 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.241012 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.241015 | orchestrator | 2025-05-29 00:58:23.241019 | orchestrator | RUNNING HANDLER [ceph-handler : copy rgw restart script] *********************** 2025-05-29 00:58:23.241023 | orchestrator | Thursday 29 May 2025 00:58:18 +0000 (0:00:00.356) 0:13:02.718 ********** 2025-05-29 00:58:23.241027 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:58:23.241030 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.241034 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:58:23.241038 | orchestrator | 2025-05-29 00:58:23.241041 | orchestrator | RUNNING HANDLER [ceph-handler : restart ceph rgw daemon(s)] ******************** 2025-05-29 00:58:23.241045 | orchestrator | Thursday 29 May 2025 00:58:19 +0000 (0:00:01.489) 0:13:04.207 ********** 2025-05-29 00:58:23.241049 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 00:58:23.241053 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 00:58:23.241056 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 00:58:23.241063 | orchestrator | skipping: [testbed-node-3] 2025-05-29 00:58:23.241067 | orchestrator | 2025-05-29 00:58:23.241070 | orchestrator | RUNNING HANDLER [ceph-handler : set _rgw_handler_called after restart] ********* 2025-05-29 00:58:23.241074 | orchestrator | Thursday 29 May 2025 00:58:20 +0000 (0:00:00.614) 0:13:04.822 ********** 2025-05-29 00:58:23.241078 | orchestrator | ok: [testbed-node-3] 2025-05-29 00:58:23.241082 | orchestrator | ok: [testbed-node-4] 2025-05-29 00:58:23.241085 | orchestrator | ok: [testbed-node-5] 2025-05-29 00:58:23.241089 | orchestrator | 2025-05-29 00:58:23.241093 | orchestrator | RUNNING HANDLER [ceph-handler : remove tempdir for scripts] ******************** 2025-05-29 00:58:23.241096 | 
orchestrator | Thursday 29 May 2025 00:58:20 +0000 (0:00:00.337) 0:13:05.159 ********** 2025-05-29 00:58:23.241100 | orchestrator | changed: [testbed-node-3] 2025-05-29 00:58:23.241104 | orchestrator | changed: [testbed-node-4] 2025-05-29 00:58:23.241107 | orchestrator | changed: [testbed-node-5] 2025-05-29 00:58:23.241111 | orchestrator | 2025-05-29 00:58:23.241117 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 00:58:23.241121 | orchestrator | testbed-node-0 : ok=131  changed=38  unreachable=0 failed=0 skipped=291  rescued=0 ignored=0 2025-05-29 00:58:23.241125 | orchestrator | testbed-node-1 : ok=119  changed=34  unreachable=0 failed=0 skipped=262  rescued=0 ignored=0 2025-05-29 00:58:23.241129 | orchestrator | testbed-node-2 : ok=126  changed=36  unreachable=0 failed=0 skipped=261  rescued=0 ignored=0 2025-05-29 00:58:23.241132 | orchestrator | testbed-node-3 : ok=175  changed=47  unreachable=0 failed=0 skipped=347  rescued=0 ignored=0 2025-05-29 00:58:23.241136 | orchestrator | testbed-node-4 : ok=164  changed=43  unreachable=0 failed=0 skipped=309  rescued=0 ignored=0 2025-05-29 00:58:23.241140 | orchestrator | testbed-node-5 : ok=166  changed=44  unreachable=0 failed=0 skipped=307  rescued=0 ignored=0 2025-05-29 00:58:23.241144 | orchestrator | 2025-05-29 00:58:23.241147 | orchestrator | 2025-05-29 00:58:23.241151 | orchestrator | 2025-05-29 00:58:23.241155 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 00:58:23.241159 | orchestrator | Thursday 29 May 2025 00:58:21 +0000 (0:00:01.301) 0:13:06.461 ********** 2025-05-29 00:58:23.241162 | orchestrator | =============================================================================== 2025-05-29 00:58:23.241166 | orchestrator | ceph-container-common : pulling registry.osism.tech/osism/ceph-daemon:17.2.7 image -- 45.31s 2025-05-29 00:58:23.241170 | orchestrator | ceph-osd : use ceph-volume to 
create bluestore osds -------------------- 44.32s 2025-05-29 00:58:23.241173 | orchestrator | ceph-rgw : create replicated pools for rgw ----------------------------- 26.05s 2025-05-29 00:58:23.241177 | orchestrator | ceph-mon : waiting for the monitor(s) to form the quorum... ------------ 21.41s 2025-05-29 00:58:23.241181 | orchestrator | ceph-mds : wait for mds socket to exist -------------------------------- 17.01s 2025-05-29 00:58:23.241184 | orchestrator | ceph-mgr : wait for all mgr to be up ----------------------------------- 13.35s 2025-05-29 00:58:23.241188 | orchestrator | ceph-osd : wait for all osd to be up ----------------------------------- 12.75s 2025-05-29 00:58:23.241192 | orchestrator | ceph-mgr : create ceph mgr keyring(s) on a mon node --------------------- 7.75s 2025-05-29 00:58:23.241195 | orchestrator | ceph-mon : fetch ceph initial keys -------------------------------------- 7.55s 2025-05-29 00:58:23.241199 | orchestrator | ceph-mds : create filesystem pools -------------------------------------- 6.93s 2025-05-29 00:58:23.241203 | orchestrator | ceph-mgr : disable ceph mgr enabled modules ----------------------------- 6.53s 2025-05-29 00:58:23.241206 | orchestrator | ceph-config : create ceph initial directories --------------------------- 6.08s 2025-05-29 00:58:23.241210 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 5.12s 2025-05-29 00:58:23.241251 | orchestrator | ceph-mgr : add modules to ceph-mgr -------------------------------------- 5.08s 2025-05-29 00:58:23.241255 | orchestrator | ceph-config : generate ceph.conf configuration file --------------------- 5.03s 2025-05-29 00:58:23.241261 | orchestrator | ceph-crash : start the ceph-crash service ------------------------------- 4.78s 2025-05-29 00:58:23.241265 | orchestrator | ceph-crash : create client.crash keyring -------------------------------- 3.73s 2025-05-29 00:58:23.241269 | orchestrator | ceph-handler : remove tempdir for scripts 
------------------------------- 3.50s 2025-05-29 00:58:23.241272 | orchestrator | ceph-osd : systemd start osd -------------------------------------------- 3.49s 2025-05-29 00:58:23.241276 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 3.35s 2025-05-29 00:58:23.241280 | orchestrator | 2025-05-29 00:58:23 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:58:23.241284 | orchestrator | 2025-05-29 00:58:23 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:58:23.241288 | orchestrator | 2025-05-29 00:58:23 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:58:26.256639 | orchestrator | 2025-05-29 00:58:26 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:58:26.258489 | orchestrator | 2025-05-29 00:58:26 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:58:26.259778 | orchestrator | 2025-05-29 00:58:26 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:58:26.259814 | orchestrator | 2025-05-29 00:58:26 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:58:29.304850 | orchestrator | 2025-05-29 00:58:29 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:58:29.305341 | orchestrator | 2025-05-29 00:58:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:58:29.306778 | orchestrator | 2025-05-29 00:58:29 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:58:29.306888 | orchestrator | 2025-05-29 00:58:29 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:58:32.351851 | orchestrator | 2025-05-29 00:58:32 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:58:32.352900 | orchestrator | 2025-05-29 00:58:32 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 
00:58:32.357327 | orchestrator | 2025-05-29 00:58:32 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state STARTED 2025-05-29 00:58:32.357386 | orchestrator | 2025-05-29 00:58:32 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:58:35.422519 | orchestrator | 2025-05-29 00:58:35 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:58:35.424095 | orchestrator | 2025-05-29 00:58:35 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:58:35.426962 | orchestrator | 2025-05-29 00:58:35 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:58:35.429368 | orchestrator | 2025-05-29 00:58:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:58:35.435259 | orchestrator | 2025-05-29 00:58:35 | INFO  | Task 2efa315a-60c7-4140-b3e7-09ffa1c9cbdf is in state SUCCESS 2025-05-29 00:58:35.436141 | orchestrator | 2025-05-29 00:58:35.436177 | orchestrator | 2025-05-29 00:58:35.436189 | orchestrator | PLAY [Set kolla_action_mariadb] ************************************************ 2025-05-29 00:58:35.436201 | orchestrator | 2025-05-29 00:58:35.436241 | orchestrator | TASK [Inform the user about the following task] ******************************** 2025-05-29 00:58:35.436253 | orchestrator | Thursday 29 May 2025 00:55:01 +0000 (0:00:00.166) 0:00:00.166 ********** 2025-05-29 00:58:35.436292 | orchestrator | ok: [localhost] => { 2025-05-29 00:58:35.436306 | orchestrator |  "msg": "The task 'Check MariaDB service' fails if the MariaDB service has not yet been deployed. This is fine." 
2025-05-29 00:58:35.436317 | orchestrator | } 2025-05-29 00:58:35.436328 | orchestrator | 2025-05-29 00:58:35.436339 | orchestrator | TASK [Check MariaDB service] *************************************************** 2025-05-29 00:58:35.436350 | orchestrator | Thursday 29 May 2025 00:55:01 +0000 (0:00:00.044) 0:00:00.211 ********** 2025-05-29 00:58:35.436361 | orchestrator | fatal: [localhost]: FAILED! => {"changed": false, "elapsed": 2, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.9:3306"} 2025-05-29 00:58:35.436373 | orchestrator | ...ignoring 2025-05-29 00:58:35.436384 | orchestrator | 2025-05-29 00:58:35.436395 | orchestrator | TASK [Set kolla_action_mariadb = upgrade if MariaDB is already running] ******** 2025-05-29 00:58:35.436406 | orchestrator | Thursday 29 May 2025 00:55:04 +0000 (0:00:02.571) 0:00:02.782 ********** 2025-05-29 00:58:35.436416 | orchestrator | skipping: [localhost] 2025-05-29 00:58:35.436448 | orchestrator | 2025-05-29 00:58:35.436460 | orchestrator | TASK [Set kolla_action_mariadb = kolla_action_ng] ****************************** 2025-05-29 00:58:35.436470 | orchestrator | Thursday 29 May 2025 00:55:04 +0000 (0:00:00.081) 0:00:02.864 ********** 2025-05-29 00:58:35.436481 | orchestrator | ok: [localhost] 2025-05-29 00:58:35.436492 | orchestrator | 2025-05-29 00:58:35.436503 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-29 00:58:35.436513 | orchestrator | 2025-05-29 00:58:35.436524 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-29 00:58:35.436535 | orchestrator | Thursday 29 May 2025 00:55:04 +0000 (0:00:00.210) 0:00:03.075 ********** 2025-05-29 00:58:35.436545 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:35.436556 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:35.436566 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:35.436577 | orchestrator | 2025-05-29 00:58:35.436588 | 
orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-29 00:58:35.436598 | orchestrator | Thursday 29 May 2025 00:55:05 +0000 (0:00:00.444) 0:00:03.519 ********** 2025-05-29 00:58:35.436609 | orchestrator | ok: [testbed-node-0] => (item=enable_mariadb_True) 2025-05-29 00:58:35.436620 | orchestrator | ok: [testbed-node-1] => (item=enable_mariadb_True) 2025-05-29 00:58:35.436631 | orchestrator | ok: [testbed-node-2] => (item=enable_mariadb_True) 2025-05-29 00:58:35.436642 | orchestrator | 2025-05-29 00:58:35.436653 | orchestrator | PLAY [Apply role mariadb] ****************************************************** 2025-05-29 00:58:35.436663 | orchestrator | 2025-05-29 00:58:35.436674 | orchestrator | TASK [mariadb : Group MariaDB hosts based on shards] *************************** 2025-05-29 00:58:35.436685 | orchestrator | Thursday 29 May 2025 00:55:05 +0000 (0:00:00.466) 0:00:03.985 ********** 2025-05-29 00:58:35.436695 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-29 00:58:35.436706 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1) 2025-05-29 00:58:35.436717 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2) 2025-05-29 00:58:35.436728 | orchestrator | 2025-05-29 00:58:35.436740 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-29 00:58:35.436752 | orchestrator | Thursday 29 May 2025 00:55:06 +0000 (0:00:00.645) 0:00:04.631 ********** 2025-05-29 00:58:35.436765 | orchestrator | included: /ansible/roles/mariadb/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:58:35.436778 | orchestrator | 2025-05-29 00:58:35.436790 | orchestrator | TASK [mariadb : Ensuring config directories exist] ***************************** 2025-05-29 00:58:35.436802 | orchestrator | Thursday 29 May 2025 00:55:06 +0000 (0:00:00.660) 0:00:05.291 ********** 2025-05-29 00:58:35.436848 | orchestrator | changed: 
[testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-29 00:58:35.436877 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-29 00:58:35.436893 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-29 00:58:35.436914 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-29 00:58:35.436943 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 
'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-29 00:58:35.436960 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-29 00:58:35.436973 | orchestrator | 2025-05-29 00:58:35.436986 | orchestrator | TASK [mariadb : Ensuring database backup config directory exists] ************** 2025-05-29 00:58:35.436998 | orchestrator | Thursday 29 May 2025 00:55:11 +0000 (0:00:04.335) 0:00:09.627 ********** 2025-05-29 00:58:35.437011 | orchestrator | skipping: [testbed-node-1] 
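
The `custom_member_list` entries repeated in the items above all follow one fixed pattern: the first Galera node is the active HAProxy backend and the remaining nodes carry the `backup` keyword, so writes land on a single node at a time. A minimal sketch of that line format (a hypothetical helper for illustration, not code from kolla-ansible or OSISM):

```python
# Hypothetical helper (not from kolla-ansible/OSISM): rebuilds the HAProxy
# backend lines seen in 'custom_member_list' above. The first Galera node is
# the active backend; the others get 'backup' so only one node takes writes.
def mariadb_member_lines(nodes, port=3306, check_port=4569):
    """nodes: list of (name, address) tuples; the first entry is the primary."""
    lines = []
    for index, (name, address) in enumerate(nodes):
        line = (f" server {name} {address}:{port} "
                f"check port {check_port} inter 2000 rise 2 fall 5")
        if index > 0:
            line += " backup"  # non-primary members only serve on failover
        lines.append(line)
    return lines
```

The health check goes to port 4569 rather than 3306 because that is where `mariadb_clustercheck` answers, reporting Galera sync state instead of mere TCP reachability.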
2025-05-29 00:58:35.437024 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:35.437037 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:58:35.437049 | orchestrator |
2025-05-29 00:58:35.437061 | orchestrator | TASK [mariadb : Copying over my.cnf for mariabackup] ***************************
2025-05-29 00:58:35.437075 | orchestrator | Thursday 29 May 2025 00:55:11 +0000 (0:00:00.599) 0:00:10.227 **********
2025-05-29 00:58:35.437087 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:35.437097 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:35.437108 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:58:35.437119 | orchestrator |
2025-05-29 00:58:35.437130 | orchestrator | TASK [mariadb : Copying over config.json files for services] *******************
2025-05-29 00:58:35.437147 | orchestrator | Thursday 29 May 2025 00:55:13 +0000 (0:00:01.512) 0:00:11.739 **********
2025-05-29 00:58:35.437171 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': ['
server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-29 00:58:35.437184 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 
192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-29 00:58:35.437202 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check 
port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-29 00:58:35.437248 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-29 00:58:35.437260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-29 00:58:35.437272 | orchestrator | changed: [testbed-node-0] => (item={'key': 
'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}})
2025-05-29 00:58:35.437283 | orchestrator |
2025-05-29 00:58:35.437294 | orchestrator | TASK [mariadb : Copying over config.json files for mariabackup] ****************
2025-05-29 00:58:35.437305 | orchestrator | Thursday 29 May 2025 00:55:18 +0000 (0:00:05.176) 0:00:16.915 **********
2025-05-29 00:58:35.437316 | orchestrator | skipping: [testbed-node-1]
2025-05-29 00:58:35.437327 | orchestrator | skipping: [testbed-node-2]
2025-05-29 00:58:35.437337 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:58:35.437355 | orchestrator |
2025-05-29 00:58:35.437366 | orchestrator | TASK [mariadb : Copying over galera.cnf] ***************************************
2025-05-29 00:58:35.437376 | orchestrator | Thursday 29 May 2025 00:55:19 +0000 (0:00:01.191) 0:00:18.107 **********
2025-05-29 00:58:35.437387 | orchestrator | changed: [testbed-node-0]
2025-05-29 00:58:35.437398 | orchestrator | changed: [testbed-node-2]
2025-05-29 00:58:35.437408 | orchestrator | changed: [testbed-node-1]
2025-05-29 00:58:35.437419 | orchestrator |
2025-05-29 00:58:35.437429 | orchestrator | TASK [mariadb : Check mariadb containers] **************************************
2025-05-29 00:58:35.437440 | orchestrator | Thursday 29 May 2025 00:55:27 +0000 (0:00:07.721) 0:00:25.829 **********
2025-05-29 00:58:35.437463 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group':
'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-29 00:58:35.437476 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 
'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-29 00:58:35.437500 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb', 'value': {'container_name': 'mariadb', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-server:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/hosts:/etc/hosts:ro', '/etc/timezone:/etc/timezone:ro', 'mariadb:/var/lib/mysql', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/clustercheck'], 'timeout': '30'}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}, 'haproxy': {'mariadb': {'enabled': True, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s', 'option httpchk'], 'custom_member_list': [' server testbed-node-0 192.168.16.10:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 192.168.16.11:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 192.168.16.12:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}, 'mariadb_external_lb': {'enabled': False, 'mode': 'tcp', 'port': '3306', 'listen_port': '3306', 'frontend_tcp_extra': ['option clitcpka', 'timeout client 3600s'], 'backend_tcp_extra': ['option srvtcpka', 'timeout server 3600s'], 'custom_member_list': [' server testbed-node-0 testbed-node-0:3306 check port 4569 inter 2000 rise 2 fall 5', ' server testbed-node-1 testbed-node-1:3306 check port 4569 inter 2000 rise 2 fall 5 backup', ' server testbed-node-2 testbed-node-2:3306 check port 4569 inter 2000 rise 2 fall 5 backup', '']}}}}) 2025-05-29 00:58:35.437519 | orchestrator | changed: [testbed-node-0] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.10', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-29 00:58:35.437531 | orchestrator | changed: [testbed-node-1] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.11', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-29 00:58:35.437542 | orchestrator | changed: [testbed-node-2] => (item={'key': 'mariadb-clustercheck', 'value': {'container_name': 'mariadb_clustercheck', 'group': 'mariadb_shard_0', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/mariadb-clustercheck:10.11.10.20241206', 'volumes': ['/etc/kolla/mariadb-clustercheck/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'environment': {'MYSQL_USERNAME': 'monitor', 'MYSQL_PASSWORD': 'iek7ooth9miesodoh2ongohcaachah0I', 'MYSQL_HOST': '192.168.16.12', 'AVAILABLE_WHEN_DONOR': '1'}}}) 2025-05-29 00:58:35.437560 | orchestrator | 2025-05-29 00:58:35.437571 | orchestrator | TASK [mariadb : Create MariaDB volume] ***************************************** 2025-05-29 00:58:35.437582 | orchestrator | Thursday 29 May 2025 00:55:31 +0000 (0:00:03.509) 0:00:29.338 ********** 2025-05-29 00:58:35.437593 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:35.437604 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:35.437614 | orchestrator | changed: [testbed-node-2] 2025-05-29 
00:58:35.437625 | orchestrator | 2025-05-29 00:58:35.437635 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB volume availability] ************* 2025-05-29 00:58:35.437646 | orchestrator | Thursday 29 May 2025 00:55:32 +0000 (0:00:01.157) 0:00:30.495 ********** 2025-05-29 00:58:35.437657 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:35.437667 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:35.437678 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:35.437688 | orchestrator | 2025-05-29 00:58:35.437699 | orchestrator | TASK [mariadb : Establish whether the cluster has already existed] ************* 2025-05-29 00:58:35.437710 | orchestrator | Thursday 29 May 2025 00:55:32 +0000 (0:00:00.589) 0:00:31.084 ********** 2025-05-29 00:58:35.437720 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:35.437731 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:35.437741 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:35.437751 | orchestrator | 2025-05-29 00:58:35.437762 | orchestrator | TASK [mariadb : Check MariaDB service port liveness] *************************** 2025-05-29 00:58:35.437773 | orchestrator | Thursday 29 May 2025 00:55:33 +0000 (0:00:00.382) 0:00:31.467 ********** 2025-05-29 00:58:35.437784 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.10:3306"} 2025-05-29 00:58:35.437795 | orchestrator | ...ignoring 2025-05-29 00:58:35.437817 | orchestrator | fatal: [testbed-node-1]: FAILED! => {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.11:3306"} 2025-05-29 00:58:35.437828 | orchestrator | ...ignoring 2025-05-29 00:58:35.437838 | orchestrator | fatal: [testbed-node-2]: FAILED! 
=> {"changed": false, "elapsed": 10, "msg": "Timeout when waiting for search string MariaDB in 192.168.16.12:3306"} 2025-05-29 00:58:35.437849 | orchestrator | ...ignoring 2025-05-29 00:58:35.437859 | orchestrator | 2025-05-29 00:58:35.437870 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service port liveness] *********** 2025-05-29 00:58:35.437881 | orchestrator | Thursday 29 May 2025 00:55:44 +0000 (0:00:11.186) 0:00:42.653 ********** 2025-05-29 00:58:35.437891 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:35.437902 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:35.437912 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:35.437923 | orchestrator | 2025-05-29 00:58:35.437933 | orchestrator | TASK [mariadb : Fail on existing but stopped cluster] ************************** 2025-05-29 00:58:35.437944 | orchestrator | Thursday 29 May 2025 00:55:44 +0000 (0:00:00.620) 0:00:43.273 ********** 2025-05-29 00:58:35.437955 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:35.437965 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:35.437976 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:35.437986 | orchestrator | 2025-05-29 00:58:35.437997 | orchestrator | TASK [mariadb : Check MariaDB service WSREP sync status] *********************** 2025-05-29 00:58:35.438007 | orchestrator | Thursday 29 May 2025 00:55:45 +0000 (0:00:00.699) 0:00:43.972 ********** 2025-05-29 00:58:35.438067 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:35.438081 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:35.438092 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:35.438103 | orchestrator | 2025-05-29 00:58:35.438120 | orchestrator | TASK [mariadb : Extract MariaDB service WSREP sync status] ********************* 2025-05-29 00:58:35.438131 | orchestrator | Thursday 29 May 2025 00:55:46 +0000 (0:00:00.500) 0:00:44.473 ********** 2025-05-29 00:58:35.438142 | orchestrator | skipping: 
[testbed-node-0] 2025-05-29 00:58:35.438152 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:35.438170 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:35.438181 | orchestrator | 2025-05-29 00:58:35.438192 | orchestrator | TASK [mariadb : Divide hosts by their MariaDB service WSREP sync status] ******* 2025-05-29 00:58:35.438219 | orchestrator | Thursday 29 May 2025 00:55:46 +0000 (0:00:00.651) 0:00:45.125 ********** 2025-05-29 00:58:35.438231 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:35.438242 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:35.438252 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:35.438263 | orchestrator | 2025-05-29 00:58:35.438273 | orchestrator | TASK [mariadb : Fail when MariaDB services are not synced across the whole cluster] *** 2025-05-29 00:58:35.438284 | orchestrator | Thursday 29 May 2025 00:55:47 +0000 (0:00:00.566) 0:00:45.692 ********** 2025-05-29 00:58:35.438294 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:35.438305 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:35.438315 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:35.438326 | orchestrator | 2025-05-29 00:58:35.438336 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-29 00:58:35.438347 | orchestrator | Thursday 29 May 2025 00:55:47 +0000 (0:00:00.609) 0:00:46.301 ********** 2025-05-29 00:58:35.438357 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:35.438368 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:35.438379 | orchestrator | included: /ansible/roles/mariadb/tasks/bootstrap_cluster.yml for testbed-node-0 2025-05-29 00:58:35.438389 | orchestrator | 2025-05-29 00:58:35.438400 | orchestrator | TASK [mariadb : Running MariaDB bootstrap container] *************************** 2025-05-29 00:58:35.438410 | orchestrator | Thursday 29 May 2025 00:55:48 +0000 (0:00:00.570) 0:00:46.871 ********** 2025-05-29 
00:58:35.438421 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:35.438431 | orchestrator | 2025-05-29 00:58:35.438442 | orchestrator | TASK [mariadb : Store bootstrap host name into facts] ************************** 2025-05-29 00:58:35.438452 | orchestrator | Thursday 29 May 2025 00:55:59 +0000 (0:00:10.567) 0:00:57.439 ********** 2025-05-29 00:58:35.438463 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:35.438473 | orchestrator | 2025-05-29 00:58:35.438484 | orchestrator | TASK [mariadb : include_tasks] ************************************************* 2025-05-29 00:58:35.438494 | orchestrator | Thursday 29 May 2025 00:55:59 +0000 (0:00:00.128) 0:00:57.568 ********** 2025-05-29 00:58:35.438505 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:35.438515 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:35.438526 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:35.438536 | orchestrator | 2025-05-29 00:58:35.438547 | orchestrator | RUNNING HANDLER [mariadb : Starting first MariaDB container] ******************* 2025-05-29 00:58:35.438557 | orchestrator | Thursday 29 May 2025 00:56:01 +0000 (0:00:01.808) 0:00:59.377 ********** 2025-05-29 00:58:35.438568 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:35.438578 | orchestrator | 2025-05-29 00:58:35.438589 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service port liveness] ******* 2025-05-29 00:58:35.438600 | orchestrator | Thursday 29 May 2025 00:56:12 +0000 (0:00:11.833) 0:01:11.210 ********** 2025-05-29 00:58:35.438610 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Wait for first MariaDB service port liveness (10 retries left). 
2025-05-29 00:58:35.438621 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:35.438632 | orchestrator | 2025-05-29 00:58:35.438642 | orchestrator | RUNNING HANDLER [mariadb : Wait for first MariaDB service to sync WSREP] ******* 2025-05-29 00:58:35.438653 | orchestrator | Thursday 29 May 2025 00:56:20 +0000 (0:00:07.358) 0:01:18.569 ********** 2025-05-29 00:58:35.438663 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:35.438674 | orchestrator | 2025-05-29 00:58:35.438684 | orchestrator | RUNNING HANDLER [mariadb : Ensure MariaDB is running normally on bootstrap host] *** 2025-05-29 00:58:35.438695 | orchestrator | Thursday 29 May 2025 00:56:22 +0000 (0:00:02.646) 0:01:21.216 ********** 2025-05-29 00:58:35.438705 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:35.438716 | orchestrator | 2025-05-29 00:58:35.438727 | orchestrator | RUNNING HANDLER [mariadb : Restart MariaDB on existing cluster members] ******** 2025-05-29 00:58:35.438747 | orchestrator | Thursday 29 May 2025 00:56:23 +0000 (0:00:00.127) 0:01:21.343 ********** 2025-05-29 00:58:35.438757 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:35.438768 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:35.438779 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:35.438789 | orchestrator | 2025-05-29 00:58:35.438805 | orchestrator | RUNNING HANDLER [mariadb : Start MariaDB on new nodes] ************************* 2025-05-29 00:58:35.438816 | orchestrator | Thursday 29 May 2025 00:56:23 +0000 (0:00:00.485) 0:01:21.829 ********** 2025-05-29 00:58:35.438827 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:35.438838 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:35.438848 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:35.438858 | orchestrator | 2025-05-29 00:58:35.438869 | orchestrator | RUNNING HANDLER [mariadb : Restart mariadb-clustercheck container] ************* 2025-05-29 00:58:35.438880 | orchestrator | Thursday 29 
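The "Wait for first MariaDB service port liveness" handler above retries until the node answers on port 3306 with a greeting containing the string "MariaDB" (kolla-ansible drives this with Ansible's `wait_for` module and a `search_regex`). A minimal sketch of that kind of probe in Python — the function name, defaults, and retry shape here are illustrative, not taken from the playbook:

```python
# Illustrative re-implementation of a banner-based port liveness probe,
# similar in spirit to Ansible's wait_for with search_regex: connect to the
# port, read the MySQL/MariaDB handshake packet (which embeds the server
# version string, e.g. "5.5.5-10.11.10-MariaDB"), and retry until it appears.
import socket
import time

def wait_for_banner(host, port, token=b"MariaDB",
                    retries=10, delay=1.0, timeout=5.0):
    """Return True once `token` appears in the first bytes sent by host:port."""
    for _ in range(retries):
        try:
            with socket.create_connection((host, port), timeout=timeout) as sock:
                sock.settimeout(timeout)
                banner = sock.recv(1024)  # server speaks first in MySQL protocol
                if token in banner:
                    return True
        except OSError:
            pass  # connection refused/reset while mysqld is still starting up
        time.sleep(delay)
    return False
```

This mirrors why the earlier "Check MariaDB service port liveness" task failed harmlessly (`...ignoring`): before the cluster is bootstrapped nothing listens on 3306, so the probe times out, and the same check succeeds once the first container is up.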
May 2025 00:56:23 +0000 (0:00:00.464) 0:01:22.293 ********** 2025-05-29 00:58:35.438890 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: mariadb_restart 2025-05-29 00:58:35.438901 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:35.438912 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:35.438922 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:35.438932 | orchestrator | 2025-05-29 00:58:35.438943 | orchestrator | PLAY [Restart mariadb services] ************************************************ 2025-05-29 00:58:35.438954 | orchestrator | skipping: no hosts matched 2025-05-29 00:58:35.438964 | orchestrator | 2025-05-29 00:58:35.438975 | orchestrator | PLAY [Start mariadb services] ************************************************** 2025-05-29 00:58:35.438992 | orchestrator | 2025-05-29 00:58:35.439010 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-29 00:58:35.439028 | orchestrator | Thursday 29 May 2025 00:56:42 +0000 (0:00:18.207) 0:01:40.501 ********** 2025-05-29 00:58:35.439055 | orchestrator | changed: [testbed-node-1] 2025-05-29 00:58:35.439075 | orchestrator | 2025-05-29 00:58:35.439104 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-29 00:58:35.439124 | orchestrator | Thursday 29 May 2025 00:56:56 +0000 (0:00:13.905) 0:01:54.407 ********** 2025-05-29 00:58:35.439142 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:35.439160 | orchestrator | 2025-05-29 00:58:35.439178 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-29 00:58:35.439195 | orchestrator | Thursday 29 May 2025 00:57:16 +0000 (0:00:20.578) 0:02:14.985 ********** 2025-05-29 00:58:35.439241 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:35.439260 | orchestrator | 2025-05-29 00:58:35.439278 | orchestrator | PLAY [Start mariadb services] 
************************************************** 2025-05-29 00:58:35.439297 | orchestrator | 2025-05-29 00:58:35.439315 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-29 00:58:35.439333 | orchestrator | Thursday 29 May 2025 00:57:19 +0000 (0:00:02.640) 0:02:17.625 ********** 2025-05-29 00:58:35.439352 | orchestrator | changed: [testbed-node-2] 2025-05-29 00:58:35.439371 | orchestrator | 2025-05-29 00:58:35.439391 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-29 00:58:35.439410 | orchestrator | Thursday 29 May 2025 00:57:40 +0000 (0:00:20.808) 0:02:38.434 ********** 2025-05-29 00:58:35.439429 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:35.439447 | orchestrator | 2025-05-29 00:58:35.439465 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-29 00:58:35.439483 | orchestrator | Thursday 29 May 2025 00:57:55 +0000 (0:00:15.566) 0:02:54.000 ********** 2025-05-29 00:58:35.439497 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:35.439508 | orchestrator | 2025-05-29 00:58:35.439519 | orchestrator | PLAY [Restart bootstrap mariadb service] *************************************** 2025-05-29 00:58:35.439529 | orchestrator | 2025-05-29 00:58:35.439540 | orchestrator | TASK [mariadb : Restart MariaDB container] ************************************* 2025-05-29 00:58:35.439551 | orchestrator | Thursday 29 May 2025 00:57:58 +0000 (0:00:02.543) 0:02:56.544 ********** 2025-05-29 00:58:35.439573 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:35.439583 | orchestrator | 2025-05-29 00:58:35.439594 | orchestrator | TASK [mariadb : Wait for MariaDB service port liveness] ************************ 2025-05-29 00:58:35.439605 | orchestrator | Thursday 29 May 2025 00:58:10 +0000 (0:00:12.651) 0:03:09.195 ********** 2025-05-29 00:58:35.439616 | orchestrator | ok: [testbed-node-0] 2025-05-29 
00:58:35.439627 | orchestrator | 2025-05-29 00:58:35.439638 | orchestrator | TASK [mariadb : Wait for MariaDB service to sync WSREP] ************************ 2025-05-29 00:58:35.439649 | orchestrator | Thursday 29 May 2025 00:58:15 +0000 (0:00:04.606) 0:03:13.802 ********** 2025-05-29 00:58:35.439659 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:35.439670 | orchestrator | 2025-05-29 00:58:35.439681 | orchestrator | PLAY [Apply mariadb post-configuration] **************************************** 2025-05-29 00:58:35.439692 | orchestrator | 2025-05-29 00:58:35.439702 | orchestrator | TASK [Include mariadb post-deploy.yml] ***************************************** 2025-05-29 00:58:35.439713 | orchestrator | Thursday 29 May 2025 00:58:17 +0000 (0:00:02.534) 0:03:16.336 ********** 2025-05-29 00:58:35.439724 | orchestrator | included: mariadb for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 00:58:35.439735 | orchestrator | 2025-05-29 00:58:35.439746 | orchestrator | TASK [mariadb : Creating shard root mysql user] ******************************** 2025-05-29 00:58:35.439756 | orchestrator | Thursday 29 May 2025 00:58:18 +0000 (0:00:00.752) 0:03:17.089 ********** 2025-05-29 00:58:35.439767 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:35.439778 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:35.439789 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:35.439800 | orchestrator | 2025-05-29 00:58:35.439810 | orchestrator | TASK [mariadb : Creating mysql monitor user] *********************************** 2025-05-29 00:58:35.439821 | orchestrator | Thursday 29 May 2025 00:58:21 +0000 (0:00:02.741) 0:03:19.830 ********** 2025-05-29 00:58:35.439832 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:35.439843 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:35.439854 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:35.439864 | orchestrator | 2025-05-29 00:58:35.439875 | orchestrator | TASK 
[mariadb : Creating database backup user and setting permissions] ********* 2025-05-29 00:58:35.439886 | orchestrator | Thursday 29 May 2025 00:58:23 +0000 (0:00:02.271) 0:03:22.102 ********** 2025-05-29 00:58:35.439896 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:35.439907 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:35.439918 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:35.439928 | orchestrator | 2025-05-29 00:58:35.439940 | orchestrator | TASK [mariadb : Granting permissions on Mariabackup database to backup user] *** 2025-05-29 00:58:35.439958 | orchestrator | Thursday 29 May 2025 00:58:26 +0000 (0:00:02.539) 0:03:24.641 ********** 2025-05-29 00:58:35.439984 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:35.440003 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:35.440021 | orchestrator | changed: [testbed-node-0] 2025-05-29 00:58:35.440039 | orchestrator | 2025-05-29 00:58:35.440051 | orchestrator | TASK [mariadb : Wait for MariaDB service to be ready through VIP] ************** 2025-05-29 00:58:35.440062 | orchestrator | Thursday 29 May 2025 00:58:28 +0000 (0:00:02.257) 0:03:26.898 ********** 2025-05-29 00:58:35.440073 | orchestrator | ok: [testbed-node-0] 2025-05-29 00:58:35.440085 | orchestrator | ok: [testbed-node-2] 2025-05-29 00:58:35.440104 | orchestrator | ok: [testbed-node-1] 2025-05-29 00:58:35.440122 | orchestrator | 2025-05-29 00:58:35.440141 | orchestrator | TASK [Include mariadb post-upgrade.yml] **************************************** 2025-05-29 00:58:35.440161 | orchestrator | Thursday 29 May 2025 00:58:32 +0000 (0:00:03.512) 0:03:30.411 ********** 2025-05-29 00:58:35.440176 | orchestrator | skipping: [testbed-node-0] 2025-05-29 00:58:35.440187 | orchestrator | skipping: [testbed-node-1] 2025-05-29 00:58:35.440198 | orchestrator | skipping: [testbed-node-2] 2025-05-29 00:58:35.440248 | orchestrator | 2025-05-29 00:58:35.440260 | orchestrator | PLAY RECAP 
********************************************************************* 2025-05-29 00:58:35.440271 | orchestrator | localhost : ok=3  changed=0 unreachable=0 failed=0 skipped=1  rescued=0 ignored=1  2025-05-29 00:58:35.440292 | orchestrator | testbed-node-0 : ok=34  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=1  2025-05-29 00:58:35.440315 | orchestrator | testbed-node-1 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-29 00:58:35.440327 | orchestrator | testbed-node-2 : ok=20  changed=8  unreachable=0 failed=0 skipped=15  rescued=0 ignored=1  2025-05-29 00:58:35.440338 | orchestrator | 2025-05-29 00:58:35.440348 | orchestrator | 2025-05-29 00:58:35.440359 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 00:58:35.440370 | orchestrator | Thursday 29 May 2025 00:58:32 +0000 (0:00:00.377) 0:03:30.789 ********** 2025-05-29 00:58:35.440381 | orchestrator | =============================================================================== 2025-05-29 00:58:35.440391 | orchestrator | mariadb : Wait for MariaDB service port liveness ----------------------- 36.14s 2025-05-29 00:58:35.440402 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 34.71s 2025-05-29 00:58:35.440413 | orchestrator | mariadb : Restart mariadb-clustercheck container ----------------------- 18.21s 2025-05-29 00:58:35.440423 | orchestrator | mariadb : Restart MariaDB container ------------------------------------ 12.65s 2025-05-29 00:58:35.440434 | orchestrator | mariadb : Starting first MariaDB container ----------------------------- 11.83s 2025-05-29 00:58:35.440445 | orchestrator | mariadb : Check MariaDB service port liveness -------------------------- 11.19s 2025-05-29 00:58:35.440455 | orchestrator | mariadb : Running MariaDB bootstrap container -------------------------- 10.57s 2025-05-29 00:58:35.440466 | orchestrator | mariadb : Copying over 
galera.cnf --------------------------------------- 7.72s 2025-05-29 00:58:35.440477 | orchestrator | mariadb : Wait for first MariaDB service port liveness ------------------ 7.36s 2025-05-29 00:58:35.440487 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 5.18s 2025-05-29 00:58:35.440498 | orchestrator | mariadb : Copying over config.json files for services ------------------- 5.18s 2025-05-29 00:58:35.440508 | orchestrator | mariadb : Wait for MariaDB service port liveness ------------------------ 4.61s 2025-05-29 00:58:35.440519 | orchestrator | mariadb : Ensuring config directories exist ----------------------------- 4.34s 2025-05-29 00:58:35.440530 | orchestrator | mariadb : Wait for MariaDB service to be ready through VIP -------------- 3.51s 2025-05-29 00:58:35.440540 | orchestrator | mariadb : Check mariadb containers -------------------------------------- 3.51s 2025-05-29 00:58:35.440551 | orchestrator | mariadb : Creating shard root mysql user -------------------------------- 2.74s 2025-05-29 00:58:35.440561 | orchestrator | mariadb : Wait for first MariaDB service to sync WSREP ------------------ 2.65s 2025-05-29 00:58:35.440572 | orchestrator | Check MariaDB service --------------------------------------------------- 2.57s 2025-05-29 00:58:35.440583 | orchestrator | mariadb : Creating database backup user and setting permissions --------- 2.54s 2025-05-29 00:58:35.440593 | orchestrator | mariadb : Wait for MariaDB service to sync WSREP ------------------------ 2.53s 2025-05-29 00:58:35.440604 | orchestrator | 2025-05-29 00:58:35 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:58:38.491915 | orchestrator | 2025-05-29 00:58:38 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:58:38.492011 | orchestrator | 2025-05-29 00:58:38 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:58:38.493187 | orchestrator | 2025-05-29 00:58:38 | INFO 
 | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:58:38.494665 | orchestrator | 2025-05-29 00:58:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:58:38.494687 | orchestrator | 2025-05-29 00:58:38 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:58:41.551249 | orchestrator | 2025-05-29 00:58:41 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:58:41.551933 | orchestrator | 2025-05-29 00:58:41 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:58:41.553074 | orchestrator | 2025-05-29 00:58:41 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:58:41.554272 | orchestrator | 2025-05-29 00:58:41 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:58:41.554302 | orchestrator | 2025-05-29 00:58:41 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:58:44.606371 | orchestrator | 2025-05-29 00:58:44 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:58:44.606479 | orchestrator | 2025-05-29 00:58:44 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:58:44.606946 | orchestrator | 2025-05-29 00:58:44 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:58:44.610496 | orchestrator | 2025-05-29 00:58:44 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:58:44.611532 | orchestrator | 2025-05-29 00:58:44 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:58:47.644514 | orchestrator | 2025-05-29 00:58:47 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:58:47.644873 | orchestrator | 2025-05-29 00:58:47 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:58:47.645348 | orchestrator | 2025-05-29 00:58:47 | INFO  | Task 
3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:05.913797 | orchestrator | 2025-05-29 00:59:05 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:05.914139 | orchestrator | 2025-05-29 00:59:05 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:08.955564 | orchestrator | 2025-05-29 00:59:08 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:08.956259 | orchestrator | 2025-05-29 00:59:08 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:08.956968 | orchestrator | 2025-05-29 00:59:08 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:08.957624 | orchestrator | 2025-05-29 00:59:08 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:08.958341 | orchestrator | 2025-05-29 00:59:08 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:11.995150 | orchestrator | 2025-05-29 00:59:11 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:11.996791 | orchestrator | 2025-05-29 00:59:11 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:11.998717 | orchestrator | 2025-05-29 00:59:11 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:12.000429 | orchestrator | 2025-05-29 00:59:12 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:12.000466 | orchestrator | 2025-05-29 00:59:12 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:15.048168 | orchestrator | 2025-05-29 00:59:15 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:15.048380 | orchestrator | 2025-05-29 00:59:15 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:15.049248 | orchestrator | 2025-05-29 00:59:15 | INFO  | Task 
3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:15.050466 | orchestrator | 2025-05-29 00:59:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:15.050491 | orchestrator | 2025-05-29 00:59:15 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:18.106280 | orchestrator | 2025-05-29 00:59:18 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:18.107581 | orchestrator | 2025-05-29 00:59:18 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:18.110377 | orchestrator | 2025-05-29 00:59:18 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:18.112201 | orchestrator | 2025-05-29 00:59:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:18.112228 | orchestrator | 2025-05-29 00:59:18 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:21.169016 | orchestrator | 2025-05-29 00:59:21 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:21.170528 | orchestrator | 2025-05-29 00:59:21 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:21.171193 | orchestrator | 2025-05-29 00:59:21 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:21.172422 | orchestrator | 2025-05-29 00:59:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:21.172447 | orchestrator | 2025-05-29 00:59:21 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:24.225599 | orchestrator | 2025-05-29 00:59:24 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:24.226711 | orchestrator | 2025-05-29 00:59:24 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:24.227893 | orchestrator | 2025-05-29 00:59:24 | INFO  | Task 
3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:24.229053 | orchestrator | 2025-05-29 00:59:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:24.229077 | orchestrator | 2025-05-29 00:59:24 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:27.281853 | orchestrator | 2025-05-29 00:59:27 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:27.286300 | orchestrator | 2025-05-29 00:59:27 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:27.288419 | orchestrator | 2025-05-29 00:59:27 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:27.290219 | orchestrator | 2025-05-29 00:59:27 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:27.290247 | orchestrator | 2025-05-29 00:59:27 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:30.343849 | orchestrator | 2025-05-29 00:59:30 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:30.345195 | orchestrator | 2025-05-29 00:59:30 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:30.346761 | orchestrator | 2025-05-29 00:59:30 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:30.350050 | orchestrator | 2025-05-29 00:59:30 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:30.350072 | orchestrator | 2025-05-29 00:59:30 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:33.390317 | orchestrator | 2025-05-29 00:59:33 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:33.392309 | orchestrator | 2025-05-29 00:59:33 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:33.392339 | orchestrator | 2025-05-29 00:59:33 | INFO  | Task 
3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:33.392352 | orchestrator | 2025-05-29 00:59:33 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:33.392363 | orchestrator | 2025-05-29 00:59:33 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:36.456927 | orchestrator | 2025-05-29 00:59:36 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:36.461154 | orchestrator | 2025-05-29 00:59:36 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:36.463574 | orchestrator | 2025-05-29 00:59:36 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:36.465579 | orchestrator | 2025-05-29 00:59:36 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:36.465603 | orchestrator | 2025-05-29 00:59:36 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:39.515511 | orchestrator | 2025-05-29 00:59:39 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:39.516774 | orchestrator | 2025-05-29 00:59:39 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:39.517866 | orchestrator | 2025-05-29 00:59:39 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:39.519262 | orchestrator | 2025-05-29 00:59:39 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:39.519296 | orchestrator | 2025-05-29 00:59:39 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:42.563572 | orchestrator | 2025-05-29 00:59:42 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:42.564917 | orchestrator | 2025-05-29 00:59:42 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:42.567565 | orchestrator | 2025-05-29 00:59:42 | INFO  | Task 
3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:42.568907 | orchestrator | 2025-05-29 00:59:42 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:42.568938 | orchestrator | 2025-05-29 00:59:42 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:45.623542 | orchestrator | 2025-05-29 00:59:45 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:45.624704 | orchestrator | 2025-05-29 00:59:45 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:45.626299 | orchestrator | 2025-05-29 00:59:45 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:45.627780 | orchestrator | 2025-05-29 00:59:45 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:45.627807 | orchestrator | 2025-05-29 00:59:45 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:48.679404 | orchestrator | 2025-05-29 00:59:48 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:48.680818 | orchestrator | 2025-05-29 00:59:48 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:48.682352 | orchestrator | 2025-05-29 00:59:48 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:48.683707 | orchestrator | 2025-05-29 00:59:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:48.683760 | orchestrator | 2025-05-29 00:59:48 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:51.740913 | orchestrator | 2025-05-29 00:59:51 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:51.743427 | orchestrator | 2025-05-29 00:59:51 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:51.745072 | orchestrator | 2025-05-29 00:59:51 | INFO  | Task 
3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:51.747311 | orchestrator | 2025-05-29 00:59:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:51.747348 | orchestrator | 2025-05-29 00:59:51 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:54.799624 | orchestrator | 2025-05-29 00:59:54 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:54.799844 | orchestrator | 2025-05-29 00:59:54 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:54.801214 | orchestrator | 2025-05-29 00:59:54 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:54.802577 | orchestrator | 2025-05-29 00:59:54 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:54.802604 | orchestrator | 2025-05-29 00:59:54 | INFO  | Wait 1 second(s) until the next check 2025-05-29 00:59:57.863025 | orchestrator | 2025-05-29 00:59:57 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 00:59:57.863452 | orchestrator | 2025-05-29 00:59:57 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 00:59:57.864925 | orchestrator | 2025-05-29 00:59:57 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 00:59:57.866806 | orchestrator | 2025-05-29 00:59:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 00:59:57.866839 | orchestrator | 2025-05-29 00:59:57 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:00:00.921326 | orchestrator | 2025-05-29 01:00:00 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 01:00:00.924593 | orchestrator | 2025-05-29 01:00:00 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 01:00:00.926190 | orchestrator | 2025-05-29 01:00:00 | INFO  | Task 
3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 01:00:00.927954 | orchestrator | 2025-05-29 01:00:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:00:00.927980 | orchestrator | 2025-05-29 01:00:00 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:00:03.984397 | orchestrator | 2025-05-29 01:00:03 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 01:00:03.985671 | orchestrator | 2025-05-29 01:00:03 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 01:00:03.987361 | orchestrator | 2025-05-29 01:00:03 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 01:00:03.988930 | orchestrator | 2025-05-29 01:00:03 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:00:03.988965 | orchestrator | 2025-05-29 01:00:03 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:00:07.044625 | orchestrator | 2025-05-29 01:00:07 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 01:00:07.045716 | orchestrator | 2025-05-29 01:00:07 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 01:00:07.047923 | orchestrator | 2025-05-29 01:00:07 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 01:00:07.049731 | orchestrator | 2025-05-29 01:00:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:00:07.049779 | orchestrator | 2025-05-29 01:00:07 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:00:10.105668 | orchestrator | 2025-05-29 01:00:10 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state STARTED 2025-05-29 01:00:10.106285 | orchestrator | 2025-05-29 01:00:10 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED 2025-05-29 01:00:10.107596 | orchestrator | 2025-05-29 01:00:10 | INFO  | Task 
3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED 2025-05-29 01:00:10.108276 | orchestrator | 2025-05-29 01:00:10 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:00:10.108401 | orchestrator | 2025-05-29 01:00:10 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:00:13.157871 | orchestrator | 2025-05-29 01:00:13.158439 | orchestrator | 2025-05-29 01:00:13.158462 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-29 01:00:13.158476 | orchestrator | 2025-05-29 01:00:13.158487 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-29 01:00:13.158499 | orchestrator | Thursday 29 May 2025 00:58:35 +0000 (0:00:00.290) 0:00:00.290 ********** 2025-05-29 01:00:13.158510 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:00:13.158522 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:00:13.158533 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:00:13.158544 | orchestrator | 2025-05-29 01:00:13.158555 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-29 01:00:13.158567 | orchestrator | Thursday 29 May 2025 00:58:36 +0000 (0:00:00.424) 0:00:00.714 ********** 2025-05-29 01:00:13.158578 | orchestrator | ok: [testbed-node-0] => (item=enable_horizon_True) 2025-05-29 01:00:13.158589 | orchestrator | ok: [testbed-node-1] => (item=enable_horizon_True) 2025-05-29 01:00:13.158600 | orchestrator | ok: [testbed-node-2] => (item=enable_horizon_True) 2025-05-29 01:00:13.158611 | orchestrator | 2025-05-29 01:00:13.158622 | orchestrator | PLAY [Apply role horizon] ****************************************************** 2025-05-29 01:00:13.158633 | orchestrator | 2025-05-29 01:00:13.158644 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-29 01:00:13.158655 | orchestrator | Thursday 29 May 2025 00:58:36 +0000 
(0:00:00.344) 0:00:01.058 ********** 2025-05-29 01:00:13.158666 | orchestrator | included: /ansible/roles/horizon/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:00:13.158678 | orchestrator | 2025-05-29 01:00:13.158688 | orchestrator | TASK [horizon : Ensuring config directories exist] ***************************** 2025-05-29 01:00:13.158699 | orchestrator | Thursday 29 May 2025 00:58:37 +0000 (0:00:00.748) 0:00:01.807 ********** 2025-05-29 01:00:13.158733 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-29 01:00:13.158879 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 
'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-29 01:00:13.158910 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': 
False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-29 01:00:13.159005 | orchestrator | 2025-05-29 01:00:13.159022 | orchestrator | TASK [horizon : Set empty custom policy] *************************************** 2025-05-29 01:00:13.159033 | orchestrator | Thursday 29 May 2025 00:58:38 +0000 (0:00:01.693) 0:00:03.501 ********** 2025-05-29 01:00:13.159046 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:00:13.159057 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:00:13.159068 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:00:13.159079 | orchestrator | 2025-05-29 01:00:13.159090 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-29 01:00:13.159101 | orchestrator | Thursday 29 May 2025 00:58:39 +0000 (0:00:00.362) 0:00:03.863 ********** 2025-05-29 01:00:13.159122 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'cloudkitty', 
'enabled': False})  2025-05-29 01:00:13.159158 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'ironic', 'enabled': False})  2025-05-29 01:00:13.159179 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'masakari', 'enabled': False})  2025-05-29 01:00:13.159197 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'mistral', 'enabled': False})  2025-05-29 01:00:13.159210 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'tacker', 'enabled': False})  2025-05-29 01:00:13.159221 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'trove', 'enabled': False})  2025-05-29 01:00:13.159232 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'watcher', 'enabled': False})  2025-05-29 01:00:13.159242 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-29 01:00:13.159253 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'ironic', 'enabled': False})  2025-05-29 01:00:13.159264 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'masakari', 'enabled': False})  2025-05-29 01:00:13.159275 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'mistral', 'enabled': False})  2025-05-29 01:00:13.159285 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'tacker', 'enabled': False})  2025-05-29 01:00:13.159296 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'trove', 'enabled': False})  2025-05-29 01:00:13.159307 | orchestrator | skipping: [testbed-node-1] => (item={'name': 'watcher', 'enabled': False})  2025-05-29 01:00:13.159318 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'cloudkitty', 'enabled': False})  2025-05-29 01:00:13.159328 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'ironic', 'enabled': False})  2025-05-29 01:00:13.159339 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'masakari', 'enabled': False})  2025-05-29 01:00:13.159360 | orchestrator | skipping: 
[testbed-node-2] => (item={'name': 'mistral', 'enabled': False})  2025-05-29 01:00:13.159371 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'tacker', 'enabled': False})  2025-05-29 01:00:13.159382 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'trove', 'enabled': False})  2025-05-29 01:00:13.159393 | orchestrator | skipping: [testbed-node-2] => (item={'name': 'watcher', 'enabled': False})  2025-05-29 01:00:13.159405 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'ceilometer', 'enabled': 'yes'}) 2025-05-29 01:00:13.159418 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'cinder', 'enabled': 'yes'}) 2025-05-29 01:00:13.159429 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'designate', 'enabled': True}) 2025-05-29 01:00:13.159440 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'glance', 'enabled': True}) 2025-05-29 01:00:13.159451 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'heat', 'enabled': True}) 2025-05-29 01:00:13.159463 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'keystone', 'enabled': True}) 2025-05-29 01:00:13.159474 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'magnum', 'enabled': True}) 2025-05-29 01:00:13.159485 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'manila', 'enabled': True}) 
2025-05-29 01:00:13.159496 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'neutron', 'enabled': True}) 2025-05-29 01:00:13.159514 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'nova', 'enabled': True}) 2025-05-29 01:00:13.159525 | orchestrator | included: /ansible/roles/horizon/tasks/policy_item.yml for testbed-node-0, testbed-node-1, testbed-node-2 => (item={'name': 'octavia', 'enabled': True}) 2025-05-29 01:00:13.159536 | orchestrator | 2025-05-29 01:00:13.159547 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-29 01:00:13.159558 | orchestrator | Thursday 29 May 2025 00:58:40 +0000 (0:00:00.947) 0:00:04.810 ********** 2025-05-29 01:00:13.159569 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:00:13.159580 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:00:13.159591 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:00:13.159602 | orchestrator | 2025-05-29 01:00:13.159613 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-29 01:00:13.159624 | orchestrator | Thursday 29 May 2025 00:58:40 +0000 (0:00:00.548) 0:00:05.358 ********** 2025-05-29 01:00:13.159635 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.159649 | orchestrator | 2025-05-29 01:00:13.159669 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-29 01:00:13.159682 | orchestrator | Thursday 29 May 2025 00:58:40 +0000 (0:00:00.124) 0:00:05.483 ********** 2025-05-29 01:00:13.159694 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.159707 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.159720 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.159733 | orchestrator | 2025-05-29 01:00:13.159746 | 
orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-29 01:00:13.159759 | orchestrator | Thursday 29 May 2025 00:58:41 +0000 (0:00:00.451) 0:00:05.934 ********** 2025-05-29 01:00:13.159793 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:00:13.159806 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:00:13.159818 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:00:13.159831 | orchestrator | 2025-05-29 01:00:13.159843 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-29 01:00:13.159855 | orchestrator | Thursday 29 May 2025 00:58:41 +0000 (0:00:00.286) 0:00:06.220 ********** 2025-05-29 01:00:13.159868 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.159880 | orchestrator | 2025-05-29 01:00:13.159894 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-29 01:00:13.159906 | orchestrator | Thursday 29 May 2025 00:58:41 +0000 (0:00:00.123) 0:00:06.343 ********** 2025-05-29 01:00:13.159918 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.159932 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.159944 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.159957 | orchestrator | 2025-05-29 01:00:13.159969 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-29 01:00:13.159982 | orchestrator | Thursday 29 May 2025 00:58:42 +0000 (0:00:00.446) 0:00:06.790 ********** 2025-05-29 01:00:13.159994 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:00:13.160006 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:00:13.160017 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:00:13.160027 | orchestrator | 2025-05-29 01:00:13.160038 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-29 01:00:13.160049 | orchestrator | Thursday 29 May 2025 00:58:42 +0000 
(0:00:00.484) 0:00:07.274 ********** 2025-05-29 01:00:13.160059 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.160070 | orchestrator | 2025-05-29 01:00:13.160080 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-29 01:00:13.160091 | orchestrator | Thursday 29 May 2025 00:58:42 +0000 (0:00:00.119) 0:00:07.393 ********** 2025-05-29 01:00:13.160102 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.160112 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.160123 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.160156 | orchestrator | 2025-05-29 01:00:13.160169 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-29 01:00:13.160180 | orchestrator | Thursday 29 May 2025 00:58:43 +0000 (0:00:00.477) 0:00:07.871 ********** 2025-05-29 01:00:13.160191 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:00:13.160202 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:00:13.160213 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:00:13.160224 | orchestrator | 2025-05-29 01:00:13.160234 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-29 01:00:13.160245 | orchestrator | Thursday 29 May 2025 00:58:43 +0000 (0:00:00.431) 0:00:08.302 ********** 2025-05-29 01:00:13.160256 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.160267 | orchestrator | 2025-05-29 01:00:13.160298 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-29 01:00:13.160310 | orchestrator | Thursday 29 May 2025 00:58:43 +0000 (0:00:00.142) 0:00:08.445 ********** 2025-05-29 01:00:13.160321 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.160331 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.160342 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.160352 | orchestrator | 2025-05-29 
01:00:13.160363 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-29 01:00:13.160374 | orchestrator | Thursday 29 May 2025 00:58:44 +0000 (0:00:00.440) 0:00:08.885 ********** 2025-05-29 01:00:13.160384 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:00:13.160395 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:00:13.160406 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:00:13.160416 | orchestrator | 2025-05-29 01:00:13.160427 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-29 01:00:13.160438 | orchestrator | Thursday 29 May 2025 00:58:44 +0000 (0:00:00.342) 0:00:09.227 ********** 2025-05-29 01:00:13.160449 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.160468 | orchestrator | 2025-05-29 01:00:13.160479 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-29 01:00:13.160490 | orchestrator | Thursday 29 May 2025 00:58:44 +0000 (0:00:00.265) 0:00:09.493 ********** 2025-05-29 01:00:13.160500 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.160511 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.160521 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.160532 | orchestrator | 2025-05-29 01:00:13.160543 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-29 01:00:13.160553 | orchestrator | Thursday 29 May 2025 00:58:45 +0000 (0:00:00.278) 0:00:09.771 ********** 2025-05-29 01:00:13.160564 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:00:13.160575 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:00:13.160586 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:00:13.160596 | orchestrator | 2025-05-29 01:00:13.160607 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-29 01:00:13.160618 | orchestrator | Thursday 29 May 2025 
00:58:45 +0000 (0:00:00.508) 0:00:10.279 ********** 2025-05-29 01:00:13.160628 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.160639 | orchestrator | 2025-05-29 01:00:13.160650 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-29 01:00:13.160660 | orchestrator | Thursday 29 May 2025 00:58:45 +0000 (0:00:00.132) 0:00:10.412 ********** 2025-05-29 01:00:13.160671 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.160682 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.160693 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.160703 | orchestrator | 2025-05-29 01:00:13.160714 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-29 01:00:13.160725 | orchestrator | Thursday 29 May 2025 00:58:46 +0000 (0:00:00.552) 0:00:10.965 ********** 2025-05-29 01:00:13.160742 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:00:13.160753 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:00:13.160764 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:00:13.160775 | orchestrator | 2025-05-29 01:00:13.160786 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-29 01:00:13.160796 | orchestrator | Thursday 29 May 2025 00:58:46 +0000 (0:00:00.562) 0:00:11.528 ********** 2025-05-29 01:00:13.160807 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.160818 | orchestrator | 2025-05-29 01:00:13.160828 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-29 01:00:13.160839 | orchestrator | Thursday 29 May 2025 00:58:47 +0000 (0:00:00.142) 0:00:11.671 ********** 2025-05-29 01:00:13.160849 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.160860 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.160871 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.160881 | 
orchestrator | 2025-05-29 01:00:13.160892 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-29 01:00:13.160903 | orchestrator | Thursday 29 May 2025 00:58:47 +0000 (0:00:00.406) 0:00:12.077 ********** 2025-05-29 01:00:13.160914 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:00:13.160924 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:00:13.160935 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:00:13.160946 | orchestrator | 2025-05-29 01:00:13.160956 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-29 01:00:13.160967 | orchestrator | Thursday 29 May 2025 00:58:47 +0000 (0:00:00.260) 0:00:12.338 ********** 2025-05-29 01:00:13.160978 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.160988 | orchestrator | 2025-05-29 01:00:13.160999 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-29 01:00:13.161010 | orchestrator | Thursday 29 May 2025 00:58:48 +0000 (0:00:00.269) 0:00:12.608 ********** 2025-05-29 01:00:13.161020 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.161031 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.161042 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.161053 | orchestrator | 2025-05-29 01:00:13.161073 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-29 01:00:13.161092 | orchestrator | Thursday 29 May 2025 00:58:48 +0000 (0:00:00.263) 0:00:12.871 ********** 2025-05-29 01:00:13.161111 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:00:13.161129 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:00:13.161238 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:00:13.161260 | orchestrator | 2025-05-29 01:00:13.161280 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-29 01:00:13.161301 | orchestrator 
| Thursday 29 May 2025 00:58:48 +0000 (0:00:00.367) 0:00:13.239 ********** 2025-05-29 01:00:13.161320 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.161339 | orchestrator | 2025-05-29 01:00:13.161356 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-29 01:00:13.161378 | orchestrator | Thursday 29 May 2025 00:58:48 +0000 (0:00:00.100) 0:00:13.339 ********** 2025-05-29 01:00:13.161398 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.161417 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.161436 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.161455 | orchestrator | 2025-05-29 01:00:13.161474 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-29 01:00:13.161526 | orchestrator | Thursday 29 May 2025 00:58:49 +0000 (0:00:00.359) 0:00:13.699 ********** 2025-05-29 01:00:13.161545 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:00:13.161564 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:00:13.161583 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:00:13.161602 | orchestrator | 2025-05-29 01:00:13.161621 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-29 01:00:13.161641 | orchestrator | Thursday 29 May 2025 00:58:49 +0000 (0:00:00.369) 0:00:14.068 ********** 2025-05-29 01:00:13.161660 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.161680 | orchestrator | 2025-05-29 01:00:13.161699 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-29 01:00:13.161751 | orchestrator | Thursday 29 May 2025 00:58:49 +0000 (0:00:00.112) 0:00:14.181 ********** 2025-05-29 01:00:13.161761 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.161771 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.161781 | orchestrator | skipping: [testbed-node-2] 2025-05-29 
01:00:13.161791 | orchestrator | 2025-05-29 01:00:13.161800 | orchestrator | TASK [horizon : Update policy file name] *************************************** 2025-05-29 01:00:13.161810 | orchestrator | Thursday 29 May 2025 00:58:50 +0000 (0:00:00.444) 0:00:14.625 ********** 2025-05-29 01:00:13.161820 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:00:13.161829 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:00:13.161858 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:00:13.161868 | orchestrator | 2025-05-29 01:00:13.161878 | orchestrator | TASK [horizon : Check if policies shall be overwritten] ************************ 2025-05-29 01:00:13.161888 | orchestrator | Thursday 29 May 2025 00:58:50 +0000 (0:00:00.402) 0:00:15.028 ********** 2025-05-29 01:00:13.161897 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.161907 | orchestrator | 2025-05-29 01:00:13.161916 | orchestrator | TASK [horizon : Update custom policy file name] ******************************** 2025-05-29 01:00:13.161931 | orchestrator | Thursday 29 May 2025 00:58:50 +0000 (0:00:00.099) 0:00:15.127 ********** 2025-05-29 01:00:13.161941 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.161951 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.161960 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.161970 | orchestrator | 2025-05-29 01:00:13.161979 | orchestrator | TASK [horizon : Copying over config.json files for services] ******************* 2025-05-29 01:00:13.161989 | orchestrator | Thursday 29 May 2025 00:58:50 +0000 (0:00:00.340) 0:00:15.468 ********** 2025-05-29 01:00:13.161999 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:00:13.162008 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:00:13.162074 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:00:13.162087 | orchestrator | 2025-05-29 01:00:13.162097 | orchestrator | TASK [horizon : Copying over horizon.conf] ************************************* 
2025-05-29 01:00:13.162117 | orchestrator | Thursday 29 May 2025 00:58:54 +0000 (0:00:03.997) 0:00:19.465 ********** 2025-05-29 01:00:13.162127 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-29 01:00:13.162172 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-29 01:00:13.162183 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/horizon.conf.j2) 2025-05-29 01:00:13.162192 | orchestrator | 2025-05-29 01:00:13.162202 | orchestrator | TASK [horizon : Copying over kolla-settings.py] ******************************** 2025-05-29 01:00:13.162212 | orchestrator | Thursday 29 May 2025 00:58:57 +0000 (0:00:02.570) 0:00:22.036 ********** 2025-05-29 01:00:13.162222 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-29 01:00:13.162232 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-29 01:00:13.162242 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9998-kolla-settings.py.j2) 2025-05-29 01:00:13.162251 | orchestrator | 2025-05-29 01:00:13.162261 | orchestrator | TASK [horizon : Copying over custom-settings.py] ******************************* 2025-05-29 01:00:13.162271 | orchestrator | Thursday 29 May 2025 00:59:00 +0000 (0:00:02.608) 0:00:24.645 ********** 2025-05-29 01:00:13.162280 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-29 01:00:13.162290 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-29 01:00:13.162300 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/horizon/templates/_9999-custom-settings.py.j2) 2025-05-29 01:00:13.162310 | orchestrator | 2025-05-29 01:00:13.162319 | 
orchestrator | TASK [horizon : Copying over existing policy file] ***************************** 2025-05-29 01:00:13.162329 | orchestrator | Thursday 29 May 2025 00:59:02 +0000 (0:00:02.264) 0:00:26.909 ********** 2025-05-29 01:00:13.162339 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.162348 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.162358 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.162367 | orchestrator | 2025-05-29 01:00:13.162377 | orchestrator | TASK [horizon : Copying over custom themes] ************************************ 2025-05-29 01:00:13.162387 | orchestrator | Thursday 29 May 2025 00:59:02 +0000 (0:00:00.378) 0:00:27.288 ********** 2025-05-29 01:00:13.162396 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.162406 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.162416 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.162425 | orchestrator | 2025-05-29 01:00:13.162435 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-29 01:00:13.162445 | orchestrator | Thursday 29 May 2025 00:59:03 +0000 (0:00:00.363) 0:00:27.651 ********** 2025-05-29 01:00:13.162455 | orchestrator | included: /ansible/roles/horizon/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:00:13.162465 | orchestrator | 2025-05-29 01:00:13.162475 | orchestrator | TASK [service-cert-copy : horizon | Copying over extra CA certificates] ******** 2025-05-29 01:00:13.162484 | orchestrator | Thursday 29 May 2025 00:59:03 +0000 (0:00:00.551) 0:00:28.203 ********** 2025-05-29 01:00:13.162516 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 
'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-29 01:00:13.162537 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 
'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 
'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-29 01:00:13.162563 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 
'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-29 01:00:13.162581 | orchestrator | 2025-05-29 01:00:13.162591 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS certificate] *** 2025-05-29 01:00:13.162601 | orchestrator | Thursday 29 May 2025 00:59:05 +0000 (0:00:01.645) 0:00:29.848 ********** 2025-05-29 01:00:13.162612 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 
'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-29 01:00:13.162628 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.162649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 
'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'},
2025-05-29 01:00:13 | INFO  | Task a4ac4d31-ac5f-4bcf-91ec-b1480c6a93df is in state SUCCESS
2025-05-29 01:00:13.162662 | orchestrator | 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-29 01:00:13.162674 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.162685 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes':
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-29 01:00:13.162701 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.162712 | orchestrator | 2025-05-29 01:00:13.162721 | orchestrator | TASK [service-cert-copy : horizon | Copying over backend internal TLS key] ***** 2025-05-29 01:00:13.162735 | orchestrator | Thursday 29 May 2025 00:59:06 +0000 (0:00:01.183) 0:00:31.032 ********** 2025-05-29 01:00:13.162754 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'horizon', 'value': 
{'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-29 01:00:13.162766 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:00:13.162781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 
'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-29 01:00:13.162799 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:00:13.162818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg 
^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}})  2025-05-29 01:00:13.162829 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:00:13.162839 | orchestrator | 2025-05-29 01:00:13.162849 | orchestrator | TASK [horizon : Deploy horizon container] ************************************** 2025-05-29 01:00:13.162858 | orchestrator | Thursday 29 May 2025 00:59:07 +0000 (0:00:01.162) 0:00:32.195 ********** 2025-05-29 01:00:13.162886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-29 01:00:13.162898 | orchestrator | changed: [testbed-node-2] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': 
['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-29 01:00:13.162929 | orchestrator | changed: [testbed-node-1] => (item={'key': 'horizon', 'value': {'container_name': 'horizon', 'group': 'horizon', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/horizon:24.0.1.20241206', 'environment': {'ENABLE_BLAZAR': 'no', 'ENABLE_CLOUDKITTY': 'no', 'ENABLE_DESIGNATE': 'yes', 'ENABLE_FWAAS': 'no', 'ENABLE_HEAT': 'yes', 'ENABLE_IRONIC': 'no', 'ENABLE_MAGNUM': 'yes', 'ENABLE_MANILA': 'yes', 'ENABLE_MASAKARI': 
'no', 'ENABLE_MISTRAL': 'no', 'ENABLE_NEUTRON_VPNAAS': 'no', 'ENABLE_OCTAVIA': 'yes', 'ENABLE_TACKER': 'no', 'ENABLE_TROVE': 'no', 'ENABLE_WATCHER': 'no', 'ENABLE_ZUN': 'no', 'FORCE_GENERATE': 'no'}, 'volumes': ['/etc/kolla/horizon/:/var/lib/kolla/config_files/:ro', '', '', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:80'], 'timeout': '30'}, 'haproxy': {'horizon': {'enabled': True, 'mode': 'http', 'external': False, 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_redirect': {'enabled': True, 'mode': 'redirect', 'external': False, 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'horizon_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '443', 'listen_port': '80', 'frontend_http_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }'], 'backend_http_extra': ['balance roundrobin'], 'tls_backend': 'no'}, 'horizon_external_redirect': {'enabled': True, 'mode': 'redirect', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '80', 'listen_port': '80', 'frontend_redirect_extra': ['use_backend acme_client_back if { path_reg ^/.well-known/acme-challenge/.+ }']}, 'acme_client': {'enabled': True, 'with_frontend': False, 'custom_member_list': []}}}}) 2025-05-29 01:00:13.162940 | orchestrator | 2025-05-29 01:00:13.162950 | orchestrator | TASK [horizon : include_tasks] ************************************************* 2025-05-29 01:00:13.162959 | orchestrator | Thursday 29 May 2025 00:59:12 +0000 
(0:00:04.987) 0:00:37.182 **********
2025-05-29 01:00:13.162969 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:00:13.162979 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:00:13.162989 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:00:13.162998 | orchestrator |
2025-05-29 01:00:13.163008 | orchestrator | TASK [horizon : include_tasks] *************************************************
2025-05-29 01:00:13.163017 | orchestrator | Thursday 29 May 2025 00:59:12 +0000 (0:00:00.325) 0:00:37.508 **********
2025-05-29 01:00:13.163027 | orchestrator | included: /ansible/roles/horizon/tasks/bootstrap.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 01:00:13.163037 | orchestrator |
2025-05-29 01:00:13.163046 | orchestrator | TASK [horizon : Creating Horizon database] *************************************
2025-05-29 01:00:13.163056 | orchestrator | Thursday 29 May 2025 00:59:13 +0000 (0:00:00.497) 0:00:38.005 **********
2025-05-29 01:00:13.163065 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:00:13.163075 | orchestrator |
2025-05-29 01:00:13.163084 | orchestrator | TASK [horizon : Creating Horizon database user and setting permissions] ********
2025-05-29 01:00:13.163094 | orchestrator | Thursday 29 May 2025 00:59:15 +0000 (0:00:02.489) 0:00:40.495 **********
2025-05-29 01:00:13.163103 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:00:13.163113 | orchestrator |
2025-05-29 01:00:13.163123 | orchestrator | TASK [horizon : Running Horizon bootstrap container] ***************************
2025-05-29 01:00:13.163161 | orchestrator | Thursday 29 May 2025 00:59:18 +0000 (0:00:02.329) 0:00:42.824 **********
2025-05-29 01:00:13.163175 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:00:13.163185 | orchestrator |
2025-05-29 01:00:13.163195 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-29 01:00:13.163205 | orchestrator | Thursday 29 May 2025 00:59:31 +0000 (0:00:13.538) 0:00:56.363 **********
2025-05-29 01:00:13.163215 | orchestrator |
2025-05-29 01:00:13.163224 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-29 01:00:13.163234 | orchestrator | Thursday 29 May 2025 00:59:31 +0000 (0:00:00.058) 0:00:56.421 **********
2025-05-29 01:00:13.163244 | orchestrator |
2025-05-29 01:00:13.163253 | orchestrator | TASK [horizon : Flush handlers] ************************************************
2025-05-29 01:00:13.163263 | orchestrator | Thursday 29 May 2025 00:59:32 +0000 (0:00:00.179) 0:00:56.601 **********
2025-05-29 01:00:13.163273 | orchestrator |
2025-05-29 01:00:13.163282 | orchestrator | RUNNING HANDLER [horizon : Restart horizon container] **************************
2025-05-29 01:00:13.163292 | orchestrator | Thursday 29 May 2025 00:59:32 +0000 (0:00:00.060) 0:00:56.662 **********
2025-05-29 01:00:13.163302 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:00:13.163311 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:00:13.163321 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:00:13.163331 | orchestrator |
2025-05-29 01:00:13.163341 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:00:13.163350 | orchestrator | testbed-node-0 : ok=39  changed=11  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-05-29 01:00:13.163361 | orchestrator | testbed-node-1 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-29 01:00:13.163371 | orchestrator | testbed-node-2 : ok=36  changed=8  unreachable=0 failed=0 skipped=16  rescued=0 ignored=0
2025-05-29 01:00:13.163381 | orchestrator |
2025-05-29 01:00:13.163391 | orchestrator |
2025-05-29 01:00:13.163400 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 01:00:13.163410 | orchestrator | Thursday 29 May 2025 01:00:10 +0000 (0:00:38.857) 0:01:35.520 **********
2025-05-29 01:00:13.163419 | orchestrator | ===============================================================================
2025-05-29 01:00:13.163429 | orchestrator | horizon : Restart horizon container ------------------------------------ 38.86s
2025-05-29 01:00:13.163439 | orchestrator | horizon : Running Horizon bootstrap container -------------------------- 13.54s
2025-05-29 01:00:13.163456 | orchestrator | horizon : Deploy horizon container -------------------------------------- 4.99s
2025-05-29 01:00:13.163466 | orchestrator | horizon : Copying over config.json files for services ------------------- 4.00s
2025-05-29 01:00:13.163476 | orchestrator | horizon : Copying over kolla-settings.py -------------------------------- 2.61s
2025-05-29 01:00:13.163486 | orchestrator | horizon : Copying over horizon.conf ------------------------------------- 2.57s
2025-05-29 01:00:13.163496 | orchestrator | horizon : Creating Horizon database ------------------------------------- 2.49s
2025-05-29 01:00:13.163505 | orchestrator | horizon : Creating Horizon database user and setting permissions -------- 2.33s
2025-05-29 01:00:13.163515 | orchestrator | horizon : Copying over custom-settings.py ------------------------------- 2.26s
2025-05-29 01:00:13.163524 | orchestrator | horizon : Ensuring config directories exist ----------------------------- 1.69s
2025-05-29 01:00:13.163534 | orchestrator | service-cert-copy : horizon | Copying over extra CA certificates -------- 1.65s
2025-05-29 01:00:13.163551 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS certificate --- 1.18s
2025-05-29 01:00:13.163561 | orchestrator | service-cert-copy : horizon | Copying over backend internal TLS key ----- 1.16s
2025-05-29 01:00:13.163570 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.95s
2025-05-29 01:00:13.163579 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.75s
2025-05-29 01:00:13.163595 | orchestrator | horizon : Update policy file name --------------------------------------- 0.56s
2025-05-29 01:00:13.163604 | orchestrator | horizon : Update custom policy file name -------------------------------- 0.55s
2025-05-29 01:00:13.163613 | orchestrator | horizon : include_tasks ------------------------------------------------- 0.55s
2025-05-29 01:00:13.163623 | orchestrator | horizon : Update policy file name --------------------------------------- 0.55s
2025-05-29 01:00:13.163632 | orchestrator | horizon : Update policy file name --------------------------------------- 0.51s
2025-05-29 01:00:13.163642 | orchestrator | 2025-05-29 01:00:13 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED
2025-05-29 01:00:13.163652 | orchestrator | 2025-05-29 01:00:13 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:13.163661 | orchestrator | 2025-05-29 01:00:13 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:13.163671 | orchestrator | 2025-05-29 01:00:13 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:16.211861 | orchestrator | 2025-05-29 01:00:16 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED
2025-05-29 01:00:16.213247 | orchestrator | 2025-05-29 01:00:16 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:16.214693 | orchestrator | 2025-05-29 01:00:16 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:16.214742 | orchestrator | 2025-05-29 01:00:16 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:19.259427 | orchestrator | 2025-05-29 01:00:19 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED
2025-05-29 01:00:19.260556 | orchestrator | 2025-05-29 01:00:19 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:19.262425 | orchestrator | 2025-05-29 01:00:19 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:19.262453 | orchestrator | 2025-05-29 01:00:19 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:22.310981 | orchestrator | 2025-05-29 01:00:22 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED
2025-05-29 01:00:22.311836 | orchestrator | 2025-05-29 01:00:22 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:22.312936 | orchestrator | 2025-05-29 01:00:22 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:22.312971 | orchestrator | 2025-05-29 01:00:22 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:25.372399 | orchestrator | 2025-05-29 01:00:25 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED
2025-05-29 01:00:25.376624 | orchestrator | 2025-05-29 01:00:25 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:25.377175 | orchestrator | 2025-05-29 01:00:25 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:25.377282 | orchestrator | 2025-05-29 01:00:25 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:28.428650 | orchestrator | 2025-05-29 01:00:28 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED
2025-05-29 01:00:28.429656 | orchestrator | 2025-05-29 01:00:28 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:28.431994 | orchestrator | 2025-05-29 01:00:28 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:28.432040 | orchestrator | 2025-05-29 01:00:28 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:31.483270 | orchestrator | 2025-05-29 01:00:31 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED
2025-05-29 01:00:31.485095 | orchestrator | 2025-05-29 01:00:31 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:31.486629 | orchestrator | 2025-05-29 01:00:31 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:31.486857 | orchestrator | 2025-05-29 01:00:31 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:34.537567 | orchestrator | 2025-05-29 01:00:34 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state STARTED
2025-05-29 01:00:34.538810 | orchestrator | 2025-05-29 01:00:34 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:34.540760 | orchestrator | 2025-05-29 01:00:34 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:34.541407 | orchestrator | 2025-05-29 01:00:34 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:37.607761 | orchestrator |
2025-05-29 01:00:37.607968 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-29 01:00:37.607985 | orchestrator |
2025-05-29 01:00:37.607997 | orchestrator | PLAY [Create ceph pools] *******************************************************
2025-05-29 01:00:37.608009 | orchestrator |
2025-05-29 01:00:37.608021 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-05-29 01:00:37.608033 | orchestrator | Thursday 29 May 2025 00:58:26 +0000 (0:00:01.096) 0:00:01.096 **********
2025-05-29 01:00:37.608045 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 01:00:37.608491 | orchestrator |
2025-05-29 01:00:37.608503 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] *****************
2025-05-29 01:00:37.608514 | orchestrator | Thursday 29 May 2025 00:58:27 +0000 (0:00:00.523) 0:00:01.620 **********
2025-05-29 01:00:37.608526 | orchestrator | changed: [testbed-node-3] =>
(item=testbed-node-0)
2025-05-29 01:00:37.608538 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-1)
2025-05-29 01:00:37.608549 | orchestrator | changed: [testbed-node-3] => (item=testbed-node-2)
2025-05-29 01:00:37.608560 | orchestrator |
2025-05-29 01:00:37.608571 | orchestrator | TASK [ceph-facts : include facts.yml] ******************************************
2025-05-29 01:00:37.608582 | orchestrator | Thursday 29 May 2025 00:58:28 +0000 (0:00:00.808) 0:00:02.428 **********
2025-05-29 01:00:37.608593 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 01:00:37.608605 | orchestrator |
2025-05-29 01:00:37.608616 | orchestrator | TASK [ceph-facts : check if it is atomic host] *********************************
2025-05-29 01:00:37.608626 | orchestrator | Thursday 29 May 2025 00:58:28 +0000 (0:00:00.734) 0:00:03.163 **********
2025-05-29 01:00:37.608637 | orchestrator | ok: [testbed-node-3]
2025-05-29 01:00:37.608648 | orchestrator | ok: [testbed-node-5]
2025-05-29 01:00:37.608659 | orchestrator | ok: [testbed-node-4]
2025-05-29 01:00:37.608670 | orchestrator |
2025-05-29 01:00:37.608681 | orchestrator | TASK [ceph-facts : set_fact is_atomic] *****************************************
2025-05-29 01:00:37.608692 | orchestrator | Thursday 29 May 2025 00:58:29 +0000 (0:00:00.304) 0:00:03.907 **********
2025-05-29 01:00:37.608703 | orchestrator | ok: [testbed-node-3]
2025-05-29 01:00:37.608714 | orchestrator | ok: [testbed-node-4]
2025-05-29 01:00:37.608725 | orchestrator | ok: [testbed-node-5]
2025-05-29 01:00:37.608735 | orchestrator |
2025-05-29 01:00:37.608746 | orchestrator | TASK [ceph-facts : check if podman binary is present] **************************
2025-05-29 01:00:37.608757 | orchestrator | Thursday 29 May 2025 00:58:29 +0000 (0:00:00.854) 0:00:04.212 **********
2025-05-29 01:00:37.608768 | orchestrator | ok: [testbed-node-3]
2025-05-29 01:00:37.608849 | orchestrator | ok: [testbed-node-4]
2025-05-29 01:00:37.608864 | orchestrator | ok: [testbed-node-5]
2025-05-29 01:00:37.608876 | orchestrator |
2025-05-29 01:00:37.608887 | orchestrator | TASK [ceph-facts : set_fact container_binary] **********************************
2025-05-29 01:00:37.608921 | orchestrator | Thursday 29 May 2025 00:58:30 +0000 (0:00:00.854) 0:00:05.066 **********
2025-05-29 01:00:37.608932 | orchestrator | ok: [testbed-node-3]
2025-05-29 01:00:37.608943 | orchestrator | ok: [testbed-node-4]
2025-05-29 01:00:37.608953 | orchestrator | ok: [testbed-node-5]
2025-05-29 01:00:37.608964 | orchestrator |
2025-05-29 01:00:37.608975 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ******************************************
2025-05-29 01:00:37.608986 | orchestrator | Thursday 29 May 2025 00:58:31 +0000 (0:00:00.382) 0:00:05.448 **********
2025-05-29 01:00:37.608997 | orchestrator | ok: [testbed-node-3]
2025-05-29 01:00:37.609007 | orchestrator | ok: [testbed-node-4]
2025-05-29 01:00:37.609018 | orchestrator | ok: [testbed-node-5]
2025-05-29 01:00:37.609029 | orchestrator |
2025-05-29 01:00:37.609040 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] *********************
2025-05-29 01:00:37.609050 | orchestrator | Thursday 29 May 2025 00:58:31 +0000 (0:00:00.338) 0:00:05.787 **********
2025-05-29 01:00:37.609061 | orchestrator | ok: [testbed-node-3]
2025-05-29 01:00:37.609072 | orchestrator | ok: [testbed-node-4]
2025-05-29 01:00:37.609082 | orchestrator | ok: [testbed-node-5]
2025-05-29 01:00:37.609093 | orchestrator |
2025-05-29 01:00:37.609104 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] ***
2025-05-29 01:00:37.609161 | orchestrator | Thursday 29 May 2025 00:58:31 +0000 (0:00:00.317) 0:00:06.105 **********
2025-05-29 01:00:37.609184 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:00:37.609205 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:00:37.609223 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:00:37.609234 | orchestrator |
2025-05-29 01:00:37.609245 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ******************
2025-05-29 01:00:37.609256 | orchestrator | Thursday 29 May 2025 00:58:32 +0000 (0:00:00.528) 0:00:06.633 **********
2025-05-29 01:00:37.609267 | orchestrator | ok: [testbed-node-3]
2025-05-29 01:00:37.609278 | orchestrator | ok: [testbed-node-4]
2025-05-29 01:00:37.609303 | orchestrator | ok: [testbed-node-5]
2025-05-29 01:00:37.609315 | orchestrator |
2025-05-29 01:00:37.609326 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************
2025-05-29 01:00:37.609337 | orchestrator | Thursday 29 May 2025 00:58:32 +0000 (0:00:00.309) 0:00:06.943 **********
2025-05-29 01:00:37.609348 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0)
2025-05-29 01:00:37.609359 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-29 01:00:37.609370 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-29 01:00:37.609381 | orchestrator |
2025-05-29 01:00:37.609391 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ********************************
2025-05-29 01:00:37.609402 | orchestrator | Thursday 29 May 2025 00:58:33 +0000 (0:00:00.708) 0:00:07.652 **********
2025-05-29 01:00:37.609413 | orchestrator | ok: [testbed-node-3]
2025-05-29 01:00:37.609424 | orchestrator | ok: [testbed-node-4]
2025-05-29 01:00:37.609435 | orchestrator | ok: [testbed-node-5]
2025-05-29 01:00:37.609446 | orchestrator |
2025-05-29 01:00:37.609457 | orchestrator | TASK [ceph-facts : find a running mon container] *******************************
2025-05-29 01:00:37.609468 | orchestrator | Thursday 29 May 2025 00:58:33 +0000 (0:00:02.423) 0:00:08.121 **********
2025-05-29 01:00:37.609529 | orchestrator | changed:
[testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-29 01:00:37.609545 | orchestrator | changed: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-29 01:00:37.609558 | orchestrator | changed: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-29 01:00:37.609570 | orchestrator | 2025-05-29 01:00:37.609583 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-29 01:00:37.609597 | orchestrator | Thursday 29 May 2025 00:58:36 +0000 (0:00:02.423) 0:00:10.544 ********** 2025-05-29 01:00:37.609609 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-29 01:00:37.609632 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-29 01:00:37.609645 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-29 01:00:37.609657 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.609669 | orchestrator | 2025-05-29 01:00:37.609682 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-29 01:00:37.609694 | orchestrator | Thursday 29 May 2025 00:58:36 +0000 (0:00:00.459) 0:00:11.004 ********** 2025-05-29 01:00:37.609709 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-29 01:00:37.609725 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-29 01:00:37.609739 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-29 01:00:37.609752 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.609765 | orchestrator | 2025-05-29 01:00:37.609778 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-29 01:00:37.609791 | orchestrator | Thursday 29 May 2025 00:58:37 +0000 (0:00:00.639) 0:00:11.643 ********** 2025-05-29 01:00:37.609807 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-29 01:00:37.609824 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-29 01:00:37.609838 | orchestrator | skipping: [testbed-node-3] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-29 01:00:37.609850 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.609861 | orchestrator | 2025-05-29 01:00:37.609872 | orchestrator | TASK 
[ceph-facts : set_fact running_mon - container] *************************** 2025-05-29 01:00:37.609888 | orchestrator | Thursday 29 May 2025 00:58:37 +0000 (0:00:00.173) 0:00:11.816 ********** 2025-05-29 01:00:37.609902 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '27cac2c63622', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-29 00:58:34.583068', 'end': '2025-05-29 00:58:34.632537', 'delta': '0:00:00.049469', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['27cac2c63622'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-29 01:00:37.609937 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '3f53557b52db', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-29 00:58:35.226068', 'end': '2025-05-29 00:58:35.267336', 'delta': '0:00:00.041268', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3f53557b52db'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-29 01:00:37.609950 | orchestrator | ok: [testbed-node-3] => (item={'changed': True, 'stdout': '06a206522b4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-29 00:58:35.784665', 'end': '2025-05-29 
00:58:35.820953', 'delta': '0:00:00.036288', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06a206522b4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-29 01:00:37.609962 | orchestrator | 2025-05-29 01:00:37.609973 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-29 01:00:37.609984 | orchestrator | Thursday 29 May 2025 00:58:37 +0000 (0:00:00.232) 0:00:12.049 ********** 2025-05-29 01:00:37.609995 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:00:37.610005 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:00:37.610066 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:00:37.610079 | orchestrator | 2025-05-29 01:00:37.610091 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-29 01:00:37.610102 | orchestrator | Thursday 29 May 2025 00:58:38 +0000 (0:00:00.563) 0:00:12.613 ********** 2025-05-29 01:00:37.610114 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-29 01:00:37.610188 | orchestrator | 2025-05-29 01:00:37.610199 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-29 01:00:37.610210 | orchestrator | Thursday 29 May 2025 00:58:39 +0000 (0:00:01.416) 0:00:14.030 ********** 2025-05-29 01:00:37.610221 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.610232 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.610243 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.610254 | orchestrator | 2025-05-29 01:00:37.610265 | orchestrator | TASK [ceph-facts : get current fsid] 
******************************************* 2025-05-29 01:00:37.610275 | orchestrator | Thursday 29 May 2025 00:58:40 +0000 (0:00:00.509) 0:00:14.539 ********** 2025-05-29 01:00:37.610286 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.610297 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.610308 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.610318 | orchestrator | 2025-05-29 01:00:37.610329 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-29 01:00:37.610340 | orchestrator | Thursday 29 May 2025 00:58:40 +0000 (0:00:00.526) 0:00:15.066 ********** 2025-05-29 01:00:37.610351 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.610362 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.610373 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.610383 | orchestrator | 2025-05-29 01:00:37.610394 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-29 01:00:37.610405 | orchestrator | Thursday 29 May 2025 00:58:40 +0000 (0:00:00.295) 0:00:15.361 ********** 2025-05-29 01:00:37.610416 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:00:37.610426 | orchestrator | 2025-05-29 01:00:37.610437 | orchestrator | TASK [ceph-facts : generate cluster fsid] ************************************** 2025-05-29 01:00:37.610457 | orchestrator | Thursday 29 May 2025 00:58:41 +0000 (0:00:00.128) 0:00:15.490 ********** 2025-05-29 01:00:37.610468 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.610479 | orchestrator | 2025-05-29 01:00:37.610489 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-29 01:00:37.610500 | orchestrator | Thursday 29 May 2025 00:58:41 +0000 (0:00:00.246) 0:00:15.736 ********** 2025-05-29 01:00:37.610517 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.610529 | orchestrator | skipping: 
[testbed-node-4] 2025-05-29 01:00:37.610539 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.610550 | orchestrator | 2025-05-29 01:00:37.610560 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-29 01:00:37.610570 | orchestrator | Thursday 29 May 2025 00:58:41 +0000 (0:00:00.491) 0:00:16.228 ********** 2025-05-29 01:00:37.610579 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.610589 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.610598 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.610608 | orchestrator | 2025-05-29 01:00:37.610617 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-29 01:00:37.610627 | orchestrator | Thursday 29 May 2025 00:58:42 +0000 (0:00:00.327) 0:00:16.555 ********** 2025-05-29 01:00:37.610636 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.610646 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.610655 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.610665 | orchestrator | 2025-05-29 01:00:37.610675 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-29 01:00:37.610684 | orchestrator | Thursday 29 May 2025 00:58:42 +0000 (0:00:00.345) 0:00:16.901 ********** 2025-05-29 01:00:37.610694 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.610704 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.610720 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.610730 | orchestrator | 2025-05-29 01:00:37.610740 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-29 01:00:37.610750 | orchestrator | Thursday 29 May 2025 00:58:42 +0000 (0:00:00.363) 0:00:17.265 ********** 2025-05-29 01:00:37.610760 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.610770 | orchestrator | skipping: 
[testbed-node-4] 2025-05-29 01:00:37.610779 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.610789 | orchestrator | 2025-05-29 01:00:37.610798 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-29 01:00:37.610808 | orchestrator | Thursday 29 May 2025 00:58:43 +0000 (0:00:00.588) 0:00:17.853 ********** 2025-05-29 01:00:37.610818 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.610827 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.610837 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.610846 | orchestrator | 2025-05-29 01:00:37.610856 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-29 01:00:37.610866 | orchestrator | Thursday 29 May 2025 00:58:43 +0000 (0:00:00.326) 0:00:18.179 ********** 2025-05-29 01:00:37.610875 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.610885 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.610894 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.610904 | orchestrator | 2025-05-29 01:00:37.610913 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-29 01:00:37.610923 | orchestrator | Thursday 29 May 2025 00:58:44 +0000 (0:00:00.358) 0:00:18.538 ********** 2025-05-29 01:00:37.610934 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--b02a0e5a--ac94--54a1--88a1--38ba26e145f6-osd--block--b02a0e5a--ac94--54a1--88a1--38ba26e145f6', 'dm-uuid-LVM-kFkfR2mg2uG0RdKoScCCsYXIzL1wUaDrnsW8OabwjzP4k0MKWfHuFtPoPX27hc2A'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 
'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.610953 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--81bd5020--0460--5411--80bb--35101e63cce8-osd--block--81bd5020--0460--5411--80bb--35101e63cce8', 'dm-uuid-LVM-ku1tZkcyLWSOzUUaMbnfRMIwJ6AfKUfx75lphOGtk86nN57fRqJNVrWLW44XINSF'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.610964 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.610974 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.610989 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 
'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.610999 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611015 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611026 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611036 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611046 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-0', 'value': {'holders': 
[], 'host': '', 'links': {'ids': ['dm-name-ceph--2961dba5--5d3e--5262--aab3--a8717ef28b96-osd--block--2961dba5--5d3e--5262--aab3--a8717ef28b96', 'dm-uuid-LVM-90PxGUbVkc7IBnExigfQK6mIHuEE3fo2YCFDhVHi9nN3OmL4GdmB6KJkvKJRQ3Er'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--10c8172d--d6a1--5b27--956e--8c5bc818fcb1-osd--block--10c8172d--d6a1--5b27--956e--8c5bc818fcb1', 'dm-uuid-LVM-u69ClKhH4KooD2hXW2P2vYFyL8r4YNPHuZb8s4q5kdBdnPfjgAVJ1FqcD3h80XjK'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611088 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 
'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611109 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155', 'scsi-SQEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part1', 'scsi-SQEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part14', 'scsi-SQEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part15', 'scsi-SQEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part16', 'scsi-SQEMU_QEMU_HARDDISK_3e9d3af7-34b1-4fa5-b4a2-fbeb047fa155-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 
'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611153 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611164 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611175 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdb', 'value': {'holders': ['ceph--b02a0e5a--ac94--54a1--88a1--38ba26e145f6-osd--block--b02a0e5a--ac94--54a1--88a1--38ba26e145f6'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-VE8tYJ-XxAf-GuWx-HiSz-BoTR-ozoD-esy6Od', 'scsi-0QEMU_QEMU_HARDDISK_172ad3b6-4b22-4cdf-a28e-ac5da2182fda', 'scsi-SQEMU_QEMU_HARDDISK_172ad3b6-4b22-4cdf-a28e-ac5da2182fda'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611191 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611202 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sdc', 'value': {'holders': ['ceph--81bd5020--0460--5411--80bb--35101e63cce8-osd--block--81bd5020--0460--5411--80bb--35101e63cce8'], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-0sGu9Q-DbTo-vXNO-bcWg-Jvqq-5Jh2-vInOeY', 'scsi-0QEMU_QEMU_HARDDISK_81c2fe1f-38cc-49f7-ae7d-3d898626253d', 'scsi-SQEMU_QEMU_HARDDISK_81c2fe1f-38cc-49f7-ae7d-3d898626253d'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611230 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611240 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611257 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_872f8c6a-38b8-4598-af69-d174e2488207', 'scsi-SQEMU_QEMU_HARDDISK_872f8c6a-38b8-4598-af69-d174e2488207'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611268 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611279 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-29-00-02-05-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611289 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.611317 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61', 'scsi-SQEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part1', 'scsi-SQEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part14', 'scsi-SQEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part15', 'scsi-SQEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part16', 'scsi-SQEMU_QEMU_HARDDISK_c7ad4de3-4f57-4eb1-a9f0-bec4cfb4ae61-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611336 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-0', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--a1850b6b--a1b4--57b7--9f5e--deb9029890df-osd--block--a1850b6b--a1b4--57b7--9f5e--deb9029890df', 'dm-uuid-LVM-ffTB4yYFOzqyp9l6VNjtab7UyxHeVzSNRk5JG42cW1fCAxN71z7Fj9Ahix4r12LQ'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdb', 'value': {'holders': ['ceph--2961dba5--5d3e--5262--aab3--a8717ef28b96-osd--block--2961dba5--5d3e--5262--aab3--a8717ef28b96'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-nJs0rF-5fyc-psQj-NDDF-8xau-LUmY-o2A3mw', 'scsi-0QEMU_QEMU_HARDDISK_d4d6d7dc-ffab-40f4-8a14-6defed4afc9f', 'scsi-SQEMU_QEMU_HARDDISK_d4d6d7dc-ffab-40f4-8a14-6defed4afc9f'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'dm-1', 'value': {'holders': [], 'host': '', 'links': {'ids': ['dm-name-ceph--05ae814f--03ae--5777--aef4--91f0b0270e90-osd--block--05ae814f--03ae--5777--aef4--91f0b0270e90', 'dm-uuid-LVM-tkriDHC3ygW15DdnKwLVvbc47iLkDPk4WFWX9fhAHuNww6Kf6WAHtjisspirk1mO'], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': '', 'sectors': 41934848, 'sectorsize': 
'512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611372 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdc', 'value': {'holders': ['ceph--10c8172d--d6a1--5b27--956e--8c5bc818fcb1-osd--block--10c8172d--d6a1--5b27--956e--8c5bc818fcb1'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-eSK637-oDdC-qeoY-pFAl-3PfZ-Ha94-Q8K5VH', 'scsi-0QEMU_QEMU_HARDDISK_ab52b3eb-0fd7-41fe-9d4d-bdc516081274', 'scsi-SQEMU_QEMU_HARDDISK_ab52b3eb-0fd7-41fe-9d4d-bdc516081274'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611388 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_f7dbb189-5858-4eca-9499-fceb9ae8f8d2', 'scsi-SQEMU_QEMU_HARDDISK_f7dbb189-5858-4eca-9499-fceb9ae8f8d2'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37 | INFO  | Task 3ebb5bb4-9a6f-4abc-97a4-594e45e8c174 is in state SUCCESS 2025-05-29 01:00:37.611411 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611428 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-29-00-02-01-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611438 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None,
'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611448 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.611458 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611469 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611479 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611493 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': 
'0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611503 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611519 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:00:37.611530 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428', 'scsi-SQEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part1', 'scsi-SQEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part14', 'scsi-SQEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part15', 'scsi-SQEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part16', 'scsi-SQEMU_QEMU_HARDDISK_13985d86-b513-49a7-ae6a-0b62fccaa428-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611547 | 
orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdb', 'value': {'holders': ['ceph--a1850b6b--a1b4--57b7--9f5e--deb9029890df-osd--block--a1850b6b--a1b4--57b7--9f5e--deb9029890df'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-a3AfDo-SWFj-jdr1-Im7o-k563-sVpW-78YTEC', 'scsi-0QEMU_QEMU_HARDDISK_baffed07-1ba6-4c69-bef3-fae49f76e29e', 'scsi-SQEMU_QEMU_HARDDISK_baffed07-1ba6-4c69-bef3-fae49f76e29e'], 'labels': [], 'masters': ['dm-0'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611562 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdc', 'value': {'holders': ['ceph--05ae814f--03ae--5777--aef4--91f0b0270e90-osd--block--05ae814f--03ae--5777--aef4--91f0b0270e90'], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['lvm-pv-uuid-ugLbjG-hbub-KRlg-RZhO-5WRL-ezxt-RTsC3p', 'scsi-0QEMU_QEMU_HARDDISK_6be5e360-5fe4-4176-98be-0e33dc067da2', 'scsi-SQEMU_QEMU_HARDDISK_6be5e360-5fe4-4176-98be-0e33dc067da2'], 'labels': [], 'masters': ['dm-1'], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sdd', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. 
Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_c045ec7e-dfd2-45aa-a5da-e7ebbe64f976', 'scsi-SQEMU_QEMU_HARDDISK_c045ec7e-dfd2-45aa-a5da-e7ebbe64f976'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 41943040, 'sectorsize': '512', 'size': '20.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611600 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-29-00-02-04-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:00:37.611610 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.611620 | orchestrator | 2025-05-29 01:00:37.611630 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-29 01:00:37.611640 | orchestrator | Thursday 29 May 2025 00:58:44 +0000 (0:00:00.605) 0:00:19.144 ********** 2025-05-29 01:00:37.611649 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] 2025-05-29 01:00:37.611659 | orchestrator | 2025-05-29 01:00:37.611669 | orchestrator | TASK [ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-29 01:00:37.611679 | orchestrator | Thursday 29 May 2025 00:58:46 +0000 (0:00:01.457) 0:00:20.601 ********** 2025-05-29 01:00:37.611688 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:00:37.611698 | orchestrator | 2025-05-29 01:00:37.611708 | orchestrator | TASK [ceph-facts 
: set_fact rgw_hostname] ************************************** 2025-05-29 01:00:37.611718 | orchestrator | Thursday 29 May 2025 00:58:46 +0000 (0:00:00.168) 0:00:20.769 ********** 2025-05-29 01:00:37.611727 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:00:37.611737 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:00:37.611746 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:00:37.611756 | orchestrator | 2025-05-29 01:00:37.611766 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-29 01:00:37.611775 | orchestrator | Thursday 29 May 2025 00:58:46 +0000 (0:00:00.403) 0:00:21.173 ********** 2025-05-29 01:00:37.611785 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:00:37.611795 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:00:37.611804 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:00:37.611814 | orchestrator | 2025-05-29 01:00:37.611824 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-29 01:00:37.611833 | orchestrator | Thursday 29 May 2025 00:58:47 +0000 (0:00:00.670) 0:00:21.843 ********** 2025-05-29 01:00:37.611843 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:00:37.611852 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:00:37.611862 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:00:37.611871 | orchestrator | 2025-05-29 01:00:37.611881 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-29 01:00:37.611891 | orchestrator | Thursday 29 May 2025 00:58:47 +0000 (0:00:00.263) 0:00:22.107 ********** 2025-05-29 01:00:37.611900 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:00:37.611910 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:00:37.611920 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:00:37.611929 | orchestrator | 2025-05-29 01:00:37.611939 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-29 
01:00:37.611949 | orchestrator | Thursday 29 May 2025 00:58:48 +0000 (0:00:00.766) 0:00:22.874 ********** 2025-05-29 01:00:37.611958 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.611968 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.611977 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.611987 | orchestrator | 2025-05-29 01:00:37.611997 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-29 01:00:37.612007 | orchestrator | Thursday 29 May 2025 00:58:48 +0000 (0:00:00.274) 0:00:23.148 ********** 2025-05-29 01:00:37.612016 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.612026 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.612041 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.612050 | orchestrator | 2025-05-29 01:00:37.612060 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-29 01:00:37.612069 | orchestrator | Thursday 29 May 2025 00:58:49 +0000 (0:00:00.405) 0:00:23.554 ********** 2025-05-29 01:00:37.612083 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.612093 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.612103 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.612112 | orchestrator | 2025-05-29 01:00:37.612142 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-29 01:00:37.612152 | orchestrator | Thursday 29 May 2025 00:58:49 +0000 (0:00:00.288) 0:00:23.842 ********** 2025-05-29 01:00:37.612162 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-29 01:00:37.612171 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-29 01:00:37.612181 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-29 01:00:37.612191 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  
2025-05-29 01:00:37.612200 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.612210 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-29 01:00:37.612219 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-29 01:00:37.612229 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-29 01:00:37.612238 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-29 01:00:37.612248 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.612257 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-29 01:00:37.612272 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.612282 | orchestrator | 2025-05-29 01:00:37.612292 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-29 01:00:37.612302 | orchestrator | Thursday 29 May 2025 00:58:50 +0000 (0:00:00.785) 0:00:24.628 ********** 2025-05-29 01:00:37.612311 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-29 01:00:37.612321 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-29 01:00:37.612331 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-29 01:00:37.612340 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-29 01:00:37.612350 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-29 01:00:37.612359 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-29 01:00:37.612368 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.612378 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-29 01:00:37.612387 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-29 01:00:37.612397 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.612406 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  
2025-05-29 01:00:37.612416 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.612425 | orchestrator | 2025-05-29 01:00:37.612435 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] ************* 2025-05-29 01:00:37.612445 | orchestrator | Thursday 29 May 2025 00:58:50 +0000 (0:00:00.590) 0:00:25.218 ********** 2025-05-29 01:00:37.612454 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-0) 2025-05-29 01:00:37.612464 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-0) 2025-05-29 01:00:37.612473 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-0) 2025-05-29 01:00:37.612483 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-1) 2025-05-29 01:00:37.612492 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-1) 2025-05-29 01:00:37.612502 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-2) 2025-05-29 01:00:37.612511 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-1) 2025-05-29 01:00:37.612520 | orchestrator | ok: [testbed-node-4] => (item=testbed-node-2) 2025-05-29 01:00:37.612530 | orchestrator | ok: [testbed-node-5] => (item=testbed-node-2) 2025-05-29 01:00:37.612545 | orchestrator | 2025-05-29 01:00:37.612555 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] **** 2025-05-29 01:00:37.612565 | orchestrator | Thursday 29 May 2025 00:58:53 +0000 (0:00:02.614) 0:00:27.832 ********** 2025-05-29 01:00:37.612574 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-29 01:00:37.612584 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-29 01:00:37.612593 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-29 01:00:37.612603 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-29 01:00:37.612613 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-29 01:00:37.612622 | orchestrator | skipping: 
[testbed-node-4] => (item=testbed-node-2)  2025-05-29 01:00:37.612632 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.612642 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.612651 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-29 01:00:37.612661 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-29 01:00:37.612670 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-29 01:00:37.612680 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.612689 | orchestrator | 2025-05-29 01:00:37.612699 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] **** 2025-05-29 01:00:37.612708 | orchestrator | Thursday 29 May 2025 00:58:54 +0000 (0:00:00.743) 0:00:28.576 ********** 2025-05-29 01:00:37.612718 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-0)  2025-05-29 01:00:37.612728 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-1)  2025-05-29 01:00:37.612737 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-2)  2025-05-29 01:00:37.612746 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.612756 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-0)  2025-05-29 01:00:37.612765 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-1)  2025-05-29 01:00:37.612774 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-0)  2025-05-29 01:00:37.612784 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-1)  2025-05-29 01:00:37.612793 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-2)  2025-05-29 01:00:37.612803 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.612817 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-2)  2025-05-29 01:00:37.612827 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.612836 | orchestrator | 2025-05-29 01:00:37.612846 | 
orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] ************************** 2025-05-29 01:00:37.612855 | orchestrator | Thursday 29 May 2025 00:58:54 +0000 (0:00:00.370) 0:00:28.947 ********** 2025-05-29 01:00:37.612865 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-29 01:00:37.612874 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-29 01:00:37.612884 | orchestrator | skipping: [testbed-node-3] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-29 01:00:37.612894 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.612903 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-29 01:00:37.612913 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-29 01:00:37.612923 | orchestrator | skipping: [testbed-node-4] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-29 01:00:37.612937 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.612947 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})  2025-05-29 01:00:37.612956 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})  2025-05-29 01:00:37.612966 | orchestrator | skipping: [testbed-node-5] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})  2025-05-29 01:00:37.612981 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.612991 | orchestrator | 2025-05-29 01:00:37.613000 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] *********************** 2025-05-29 01:00:37.613010 | orchestrator | Thursday 29 May 2025 00:58:54 +0000 (0:00:00.375) 0:00:29.323 ********** 2025-05-29 01:00:37.613020 | orchestrator | included: 
/ansible/roles/ceph-facts/tasks/set_radosgw_address.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 01:00:37.613029 | orchestrator | 2025-05-29 01:00:37.613039 | orchestrator | TASK [ceph-facts : set current radosgw_address_block, radosgw_address, radosgw_interface from node "{{ ceph_dashboard_call_item }}"] *** 2025-05-29 01:00:37.613049 | orchestrator | Thursday 29 May 2025 00:58:55 +0000 (0:00:00.758) 0:00:30.081 ********** 2025-05-29 01:00:37.613058 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.613068 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.613077 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.613087 | orchestrator | 2025-05-29 01:00:37.613097 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv4] **** 2025-05-29 01:00:37.613106 | orchestrator | Thursday 29 May 2025 00:58:56 +0000 (0:00:00.345) 0:00:30.427 ********** 2025-05-29 01:00:37.613166 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.613187 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.613204 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.613214 | orchestrator | 2025-05-29 01:00:37.613224 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address_block ipv6] **** 2025-05-29 01:00:37.613234 | orchestrator | Thursday 29 May 2025 00:58:56 +0000 (0:00:00.356) 0:00:30.784 ********** 2025-05-29 01:00:37.613244 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.613253 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.613263 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.613273 | orchestrator | 2025-05-29 01:00:37.613282 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_address] *************** 2025-05-29 01:00:37.613292 | orchestrator | Thursday 29 May 2025 00:58:56 +0000 (0:00:00.302) 0:00:31.086 ********** 2025-05-29 01:00:37.613302 | orchestrator | ok: 
[testbed-node-3] 2025-05-29 01:00:37.613311 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:00:37.613321 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:00:37.613331 | orchestrator | 2025-05-29 01:00:37.613340 | orchestrator | TASK [ceph-facts : set_fact _interface] **************************************** 2025-05-29 01:00:37.613350 | orchestrator | Thursday 29 May 2025 00:58:57 +0000 (0:00:00.632) 0:00:31.719 ********** 2025-05-29 01:00:37.613360 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 01:00:37.613369 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 01:00:37.613379 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 01:00:37.613389 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.613398 | orchestrator | 2025-05-29 01:00:37.613408 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv4] ****** 2025-05-29 01:00:37.613418 | orchestrator | Thursday 29 May 2025 00:58:57 +0000 (0:00:00.315) 0:00:32.034 ********** 2025-05-29 01:00:37.613427 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 01:00:37.613437 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 01:00:37.613447 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 01:00:37.613457 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.613466 | orchestrator | 2025-05-29 01:00:37.613476 | orchestrator | TASK [ceph-facts : set_fact _radosgw_address to radosgw_interface - ipv6] ****** 2025-05-29 01:00:37.613486 | orchestrator | Thursday 29 May 2025 00:58:57 +0000 (0:00:00.368) 0:00:32.403 ********** 2025-05-29 01:00:37.613496 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 01:00:37.613505 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 01:00:37.613515 | orchestrator | skipping: [testbed-node-3] => 
(item=testbed-node-5)  2025-05-29 01:00:37.613531 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.613541 | orchestrator | 2025-05-29 01:00:37.613551 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-29 01:00:37.613561 | orchestrator | Thursday 29 May 2025 00:58:58 +0000 (0:00:00.356) 0:00:32.760 ********** 2025-05-29 01:00:37.613570 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:00:37.613585 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:00:37.613596 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:00:37.613605 | orchestrator | 2025-05-29 01:00:37.613615 | orchestrator | TASK [ceph-facts : set_fact rgw_instances without rgw multisite] *************** 2025-05-29 01:00:37.613624 | orchestrator | Thursday 29 May 2025 00:58:58 +0000 (0:00:00.291) 0:00:33.051 ********** 2025-05-29 01:00:37.613634 | orchestrator | ok: [testbed-node-3] => (item=0) 2025-05-29 01:00:37.613644 | orchestrator | ok: [testbed-node-4] => (item=0) 2025-05-29 01:00:37.613653 | orchestrator | ok: [testbed-node-5] => (item=0) 2025-05-29 01:00:37.613663 | orchestrator | 2025-05-29 01:00:37.613673 | orchestrator | TASK [ceph-facts : set_fact is_rgw_instances_defined] ************************** 2025-05-29 01:00:37.613682 | orchestrator | Thursday 29 May 2025 00:58:59 +0000 (0:00:00.668) 0:00:33.720 ********** 2025-05-29 01:00:37.613691 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.613699 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.613707 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.613715 | orchestrator | 2025-05-29 01:00:37.613723 | orchestrator | TASK [ceph-facts : reset rgw_instances (workaround)] *************************** 2025-05-29 01:00:37.613731 | orchestrator | Thursday 29 May 2025 00:58:59 +0000 (0:00:00.438) 0:00:34.158 ********** 2025-05-29 01:00:37.613739 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.613747 | orchestrator | 
skipping: [testbed-node-4] 2025-05-29 01:00:37.613754 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.613762 | orchestrator | 2025-05-29 01:00:37.613776 | orchestrator | TASK [ceph-facts : set_fact rgw_instances with rgw multisite] ****************** 2025-05-29 01:00:37.613784 | orchestrator | Thursday 29 May 2025 00:58:59 +0000 (0:00:00.244) 0:00:34.403 ********** 2025-05-29 01:00:37.613793 | orchestrator | skipping: [testbed-node-3] => (item=0)  2025-05-29 01:00:37.613800 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.613808 | orchestrator | skipping: [testbed-node-4] => (item=0)  2025-05-29 01:00:37.613816 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.613824 | orchestrator | skipping: [testbed-node-5] => (item=0)  2025-05-29 01:00:37.613832 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.613840 | orchestrator | 2025-05-29 01:00:37.613847 | orchestrator | TASK [ceph-facts : set_fact rgw_instances_host] ******************************** 2025-05-29 01:00:37.613855 | orchestrator | Thursday 29 May 2025 00:59:00 +0000 (0:00:00.440) 0:00:34.844 ********** 2025-05-29 01:00:37.613863 | orchestrator | skipping: [testbed-node-3] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.13', 'radosgw_frontend_port': 8081})  2025-05-29 01:00:37.613871 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.613879 | orchestrator | skipping: [testbed-node-4] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.14', 'radosgw_frontend_port': 8081})  2025-05-29 01:00:37.613887 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.613895 | orchestrator | skipping: [testbed-node-5] => (item={'instance_name': 'rgw0', 'radosgw_address': '192.168.16.15', 'radosgw_frontend_port': 8081})  2025-05-29 01:00:37.613903 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.613910 | orchestrator | 2025-05-29 01:00:37.613918 | orchestrator | TASK [ceph-facts : set_fact 
rgw_instances_all] ********************************* 2025-05-29 01:00:37.613926 | orchestrator | Thursday 29 May 2025 00:59:00 +0000 (0:00:00.404) 0:00:35.249 ********** 2025-05-29 01:00:37.613934 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-3)  2025-05-29 01:00:37.613942 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-3)  2025-05-29 01:00:37.613950 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-4)  2025-05-29 01:00:37.613958 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-3)  2025-05-29 01:00:37.613973 | orchestrator | skipping: [testbed-node-3] => (item=testbed-node-5)  2025-05-29 01:00:37.613981 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.613989 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-4)  2025-05-29 01:00:37.613997 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-4)  2025-05-29 01:00:37.614004 | orchestrator | skipping: [testbed-node-5] => (item=testbed-node-5)  2025-05-29 01:00:37.614012 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.614044 | orchestrator | skipping: [testbed-node-4] => (item=testbed-node-5)  2025-05-29 01:00:37.614053 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.614060 | orchestrator | 2025-05-29 01:00:37.614069 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] *** 2025-05-29 01:00:37.614077 | orchestrator | Thursday 29 May 2025 00:59:01 +0000 (0:00:00.674) 0:00:35.923 ********** 2025-05-29 01:00:37.614084 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:00:37.614092 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:00:37.614100 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:00:37.614108 | orchestrator | 2025-05-29 01:00:37.614134 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] ************************************** 2025-05-29 01:00:37.614143 | orchestrator | Thursday 29 May 2025 
00:59:01 +0000 (0:00:00.227) 0:00:36.150 ********** 2025-05-29 01:00:37.614152 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-29 01:00:37.614160 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-29 01:00:37.614168 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-29 01:00:37.614176 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-29 01:00:37.614184 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-29 01:00:37.614192 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-29 01:00:37.614200 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 2025-05-29 01:00:37.614208 | orchestrator | 2025-05-29 01:00:37.614216 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ******************************** 2025-05-29 01:00:37.614224 | orchestrator | Thursday 29 May 2025 00:59:02 +0000 (0:00:00.827) 0:00:36.978 ********** 2025-05-29 01:00:37.614236 | orchestrator | ok: [testbed-node-3 -> testbed-node-0(192.168.16.10)] => (item=testbed-node-0) 2025-05-29 01:00:37.614244 | orchestrator | ok: [testbed-node-3 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-29 01:00:37.614252 | orchestrator | ok: [testbed-node-3 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-29 01:00:37.614260 | orchestrator | ok: [testbed-node-3] => (item=testbed-node-3) 2025-05-29 01:00:37.614268 | orchestrator | ok: [testbed-node-3 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4) 2025-05-29 01:00:37.614276 | orchestrator | ok: [testbed-node-3 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5) 2025-05-29 01:00:37.614284 | orchestrator | ok: [testbed-node-3 -> testbed-manager(192.168.16.5)] => (item=testbed-manager) 
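The two `ceph-facts` tasks above (`set_fact ceph_run_cmd` and `set_fact ceph_admin_command`) loop over every cluster host and delegate a `set_fact` to each one, so that later tasks know how to invoke `ceph` on that host. The log does not show the resulting fact values, so the following is only a minimal sketch of the general pattern; the containerized `exec` wrapper, the `ceph-mon-<hostname>` container name, and the bare-metal fallback are assumptions, not the exact logic of the ceph-facts role.

```python
# Sketch of deriving a per-host "ceph run command" fact.
# Hypothetical: container naming and the bare-metal fallback are
# assumptions; the real logic lives in the ceph-facts role.

def build_ceph_run_cmd(hostname: str, containerized: bool,
                       container_binary: str = "docker") -> str:
    """Return the command prefix used to invoke `ceph` on a given host."""
    if containerized:
        # Containerized deployments exec into the host's monitor container.
        return f"{container_binary} exec ceph-mon-{hostname} ceph"
    # Bare-metal deployments call the binary directly.
    return "ceph"

hosts = ["testbed-node-0", "testbed-node-1", "testbed-node-2"]
run_cmds = {h: build_ceph_run_cmd(h, containerized=True) for h in hosts}
```

Delegating the `set_fact` per host (as the log's `testbed-node-3 -> testbed-node-X` lines show) means each host ends up with its own value, which is why the loop touches the manager and all six nodes.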
2025-05-29 01:00:37.614292 | orchestrator |
2025-05-29 01:00:37.614300 | orchestrator | TASK [Include tasks from the ceph-osd role] ************************************
2025-05-29 01:00:37.614308 | orchestrator | Thursday 29 May 2025 00:59:04 +0000 (0:00:01.563) 0:00:38.541 **********
2025-05-29 01:00:37.614316 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:00:37.614324 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:00:37.614332 | orchestrator | included: /ansible/tasks/openstack_config.yml for testbed-node-5
2025-05-29 01:00:37.614340 | orchestrator |
2025-05-29 01:00:37.614354 | orchestrator | TASK [create openstack pool(s)] ************************************************
2025-05-29 01:00:37.614362 | orchestrator | Thursday 29 May 2025 00:59:04 +0000 (0:00:00.479) 0:00:39.020 **********
2025-05-29 01:00:37.614372 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'backups', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-29 01:00:37.614387 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'volumes', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-29 01:00:37.614395 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'images', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-29 01:00:37.614403 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'metrics', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-29 01:00:37.614411 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item={'application': 'rbd', 'erasure_profile': '', 'expected_num_objects': '', 'min_size': 0, 'name': 'vms', 'pg_autoscale_mode': False, 'pg_num': 32, 'pgp_num': 32, 'rule_name': 'replicated_rule', 'size': 3, 'type': 1})
2025-05-29 01:00:37.614420 | orchestrator |
2025-05-29 01:00:37.614427 | orchestrator | TASK [generate keys] ***********************************************************
2025-05-29 01:00:37.614435 | orchestrator | Thursday 29 May 2025 00:59:46 +0000 (0:00:42.013) 0:01:21.034 **********
2025-05-29 01:00:37.614443 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614451 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614459 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614466 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614474 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614482 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614490 | orchestrator | changed: [testbed-node-5 -> {{ groups[mon_group_name][0] }}]
2025-05-29 01:00:37.614498 | orchestrator |
2025-05-29 01:00:37.614505 | orchestrator | TASK [get keys from monitors] **************************************************
2025-05-29 01:00:37.614513 | orchestrator | Thursday 29 May 2025 01:00:06 +0000 (0:00:20.195) 0:01:41.230 **********
2025-05-29 01:00:37.614521 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614529 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614536 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614544 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614552 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614560 | orchestrator | ok: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614568 | orchestrator | ok: [testbed-node-5 -> {{ groups.get(mon_group_name)[0] }}]
2025-05-29 01:00:37.614575 | orchestrator |
2025-05-29 01:00:37.614587 | orchestrator | TASK [copy ceph key(s) if needed] **********************************************
2025-05-29 01:00:37.614595 | orchestrator | Thursday 29 May 2025 01:00:16 +0000 (0:00:09.798) 0:01:51.028 **********
2025-05-29 01:00:37.614603 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614611 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-29 01:00:37.614624 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-29 01:00:37.614631 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614639 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-29 01:00:37.614647 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-29 01:00:37.614655 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614663 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-29 01:00:37.614670 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] =>
(item=None)
2025-05-29 01:00:37.614683 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614691 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-29 01:00:37.614699 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-29 01:00:37.614707 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614715 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-29 01:00:37.614723 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-29 01:00:37.614730 | orchestrator | changed: [testbed-node-5 -> testbed-node-0(192.168.16.10)] => (item=None)
2025-05-29 01:00:37.614738 | orchestrator | changed: [testbed-node-5 -> testbed-node-1(192.168.16.11)] => (item=None)
2025-05-29 01:00:37.614746 | orchestrator | changed: [testbed-node-5 -> testbed-node-2(192.168.16.12)] => (item=None)
2025-05-29 01:00:37.614754 | orchestrator | changed: [testbed-node-5 -> {{ item.1 }}]
2025-05-29 01:00:37.614761 | orchestrator |
2025-05-29 01:00:37.614769 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:00:37.614777 | orchestrator | testbed-node-3 : ok=30  changed=2  unreachable=0 failed=0 skipped=37  rescued=0 ignored=0
2025-05-29 01:00:37.614785 | orchestrator | testbed-node-4 : ok=20  changed=0 unreachable=0 failed=0 skipped=30  rescued=0 ignored=0
2025-05-29 01:00:37.614793 | orchestrator | testbed-node-5 : ok=25  changed=3  unreachable=0 failed=0 skipped=29  rescued=0 ignored=0
2025-05-29 01:00:37.614801 | orchestrator |
2025-05-29 01:00:37.614809 | orchestrator |
2025-05-29 01:00:37.614817 | orchestrator |
2025-05-29 01:00:37.614825 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 01:00:37.614833 | orchestrator | Thursday 29 May 2025 01:00:34 +0000 (0:00:18.205) 0:02:09.234 **********
2025-05-29 01:00:37.614841 | orchestrator | ===============================================================================
2025-05-29 01:00:37.614848 | orchestrator | create openstack pool(s) ----------------------------------------------- 42.01s
2025-05-29 01:00:37.614856 | orchestrator | generate keys ---------------------------------------------------------- 20.20s
2025-05-29 01:00:37.614864 | orchestrator | copy ceph key(s) if needed --------------------------------------------- 18.21s
2025-05-29 01:00:37.614872 | orchestrator | get keys from monitors -------------------------------------------------- 9.80s
2025-05-29 01:00:37.614879 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 2.61s
2025-05-29 01:00:37.614887 | orchestrator | ceph-facts : find a running mon container ------------------------------- 2.42s
2025-05-29 01:00:37.614895 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.56s
2025-05-29 01:00:37.614902 | orchestrator | ceph-facts : get ceph current status ------------------------------------ 1.46s
2025-05-29 01:00:37.614910 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.42s
2025-05-29 01:00:37.614918 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.85s
2025-05-29 01:00:37.614930 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 0.83s
2025-05-29 01:00:37.614938 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.81s
2025-05-29 01:00:37.614946 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.79s
2025-05-29 01:00:37.614954 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.77s
2025-05-29 01:00:37.614962 | orchestrator | ceph-facts : import_tasks set_radosgw_address.yml ----------------------- 0.76s
2025-05-29 01:00:37.614969 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.74s
2025-05-29 01:00:37.614977 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4 ---- 0.74s
2025-05-29 01:00:37.614985 | orchestrator | ceph-facts : include facts.yml ------------------------------------------ 0.73s
2025-05-29 01:00:37.614993 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.71s
2025-05-29 01:00:37.615000 | orchestrator | ceph-facts : set_fact rgw_instances_all --------------------------------- 0.67s
2025-05-29 01:00:37.615012 | orchestrator | 2025-05-29 01:00:37 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:37.615020 | orchestrator | 2025-05-29 01:00:37 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:37.615028 | orchestrator | 2025-05-29 01:00:37 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:00:37.615036 | orchestrator | 2025-05-29 01:00:37 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:40.667495 | orchestrator | 2025-05-29 01:00:40 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:40.670866 | orchestrator | 2025-05-29 01:00:40 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:40.672902 | orchestrator | 2025-05-29 01:00:40 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:00:40.672939 | orchestrator | 2025-05-29 01:00:40 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:43.724532 | orchestrator | 2025-05-29 01:00:43 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:43.726109 | orchestrator | 2025-05-29 01:00:43 | INFO  | Task
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:43.728204 | orchestrator | 2025-05-29 01:00:43 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:00:43.728524 | orchestrator | 2025-05-29 01:00:43 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:46.789350 | orchestrator | 2025-05-29 01:00:46 | INFO  | Task 880a1740-15c7-4628-8023-2046f78b4e3a is in state STARTED
2025-05-29 01:00:46.791468 | orchestrator | 2025-05-29 01:00:46 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:46.794806 | orchestrator | 2025-05-29 01:00:46 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:46.796403 | orchestrator | 2025-05-29 01:00:46 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:00:46.796576 | orchestrator | 2025-05-29 01:00:46 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:49.840409 | orchestrator | 2025-05-29 01:00:49 | INFO  | Task 880a1740-15c7-4628-8023-2046f78b4e3a is in state STARTED
2025-05-29 01:00:49.841311 | orchestrator | 2025-05-29 01:00:49 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:49.842767 | orchestrator | 2025-05-29 01:00:49 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:49.844029 | orchestrator | 2025-05-29 01:00:49 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:00:49.844080 | orchestrator | 2025-05-29 01:00:49 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:52.896035 | orchestrator | 2025-05-29 01:00:52 | INFO  | Task 880a1740-15c7-4628-8023-2046f78b4e3a is in state STARTED
2025-05-29 01:00:52.896715 | orchestrator | 2025-05-29 01:00:52 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:52.898672 | orchestrator | 2025-05-29 01:00:52 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:52.899237 | orchestrator | 2025-05-29 01:00:52 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:00:52.899524 | orchestrator | 2025-05-29 01:00:52 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:55.935827 | orchestrator | 2025-05-29 01:00:55 | INFO  | Task 880a1740-15c7-4628-8023-2046f78b4e3a is in state STARTED
2025-05-29 01:00:55.936068 | orchestrator | 2025-05-29 01:00:55 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:55.936639 | orchestrator | 2025-05-29 01:00:55 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:55.937337 | orchestrator | 2025-05-29 01:00:55 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:00:55.937568 | orchestrator | 2025-05-29 01:00:55 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:00:58.985457 | orchestrator | 2025-05-29 01:00:58 | INFO  | Task 880a1740-15c7-4628-8023-2046f78b4e3a is in state STARTED
2025-05-29 01:00:58.986075 | orchestrator | 2025-05-29 01:00:58 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:00:58.987555 | orchestrator | 2025-05-29 01:00:58 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:00:58.988945 | orchestrator | 2025-05-29 01:00:58 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:00:58.989041 | orchestrator | 2025-05-29 01:00:58 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:02.038138 | orchestrator | 2025-05-29 01:01:02 | INFO  | Task 880a1740-15c7-4628-8023-2046f78b4e3a is in state STARTED
2025-05-29 01:01:02.040088 | orchestrator | 2025-05-29 01:01:02 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:01:02.041021 | orchestrator | 2025-05-29 01:01:02 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:02.042805 | orchestrator | 2025-05-29 01:01:02 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:01:02.042839 | orchestrator | 2025-05-29 01:01:02 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:05.092771 | orchestrator | 2025-05-29 01:01:05 | INFO  | Task 880a1740-15c7-4628-8023-2046f78b4e3a is in state STARTED
2025-05-29 01:01:05.094353 | orchestrator | 2025-05-29 01:01:05 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:01:05.097389 | orchestrator | 2025-05-29 01:01:05 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:05.098712 | orchestrator | 2025-05-29 01:01:05 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:01:05.098752 | orchestrator | 2025-05-29 01:01:05 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:08.151871 | orchestrator | 2025-05-29 01:01:08 | INFO  | Task 880a1740-15c7-4628-8023-2046f78b4e3a is in state STARTED
2025-05-29 01:01:08.153966 | orchestrator | 2025-05-29 01:01:08 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:01:08.155927 | orchestrator | 2025-05-29 01:01:08 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:08.157031 | orchestrator | 2025-05-29 01:01:08 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:01:08.157063 | orchestrator | 2025-05-29 01:01:08 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:11.210485 | orchestrator | 2025-05-29 01:01:11 | INFO  | Task 880a1740-15c7-4628-8023-2046f78b4e3a is in state STARTED
2025-05-29 01:01:11.211234 | orchestrator | 2025-05-29 01:01:11 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state STARTED
2025-05-29 01:01:11.213787 | orchestrator | 2025-05-29 01:01:11 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:11.215144 | orchestrator | 2025-05-29 01:01:11 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:01:11.215960 | orchestrator | 2025-05-29 01:01:11 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:14.258495 | orchestrator | 2025-05-29 01:01:14 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:14.259288 | orchestrator | 2025-05-29 01:01:14 | INFO  | Task dc61d1c4-d63c-4661-bb18-14b1db4b37b8 is in state STARTED
2025-05-29 01:01:14.260864 | orchestrator | 2025-05-29 01:01:14 | INFO  | Task 880a1740-15c7-4628-8023-2046f78b4e3a is in state STARTED
2025-05-29 01:01:14.265201 | orchestrator | 2025-05-29 01:01:14 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:14.267468 | orchestrator | 2025-05-29 01:01:14 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:14.270540 | orchestrator | 2025-05-29 01:01:14 | INFO  | Task 3b52cf61-14bc-4b8a-b709-911dcbb43beb is in state SUCCESS
2025-05-29 01:01:14.271182 | orchestrator |
2025-05-29 01:01:14.273217 | orchestrator |
2025-05-29 01:01:14.273273 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-29 01:01:14.273294 | orchestrator |
2025-05-29 01:01:14.273314 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-29 01:01:14.273333 | orchestrator | Thursday 29 May 2025 00:58:36 +0000 (0:00:00.382) 0:00:00.382 **********
2025-05-29 01:01:14.273352 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:01:14.273372 | orchestrator | ok: [testbed-node-1]
2025-05-29 01:01:14.273384 | orchestrator | ok: [testbed-node-2]
2025-05-29 01:01:14.273395 | orchestrator |
2025-05-29 01:01:14.273406 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-29
01:01:14.273417 | orchestrator | Thursday 29 May 2025 00:58:36 +0000 (0:00:00.477) 0:00:00.860 ********** 2025-05-29 01:01:14.273428 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-29 01:01:14.273440 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-29 01:01:14.273451 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-29 01:01:14.273461 | orchestrator | 2025-05-29 01:01:14.273473 | orchestrator | PLAY [Apply role keystone] ***************************************************** 2025-05-29 01:01:14.273484 | orchestrator | 2025-05-29 01:01:14.273495 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-29 01:01:14.273523 | orchestrator | Thursday 29 May 2025 00:58:36 +0000 (0:00:00.315) 0:00:01.176 ********** 2025-05-29 01:01:14.273535 | orchestrator | included: /ansible/roles/keystone/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:01:14.273547 | orchestrator | 2025-05-29 01:01:14.273558 | orchestrator | TASK [keystone : Ensuring config directories exist] **************************** 2025-05-29 01:01:14.273569 | orchestrator | Thursday 29 May 2025 00:58:37 +0000 (0:00:00.929) 0:00:02.105 ********** 2025-05-29 01:01:14.273586 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 
'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.273624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.273654 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.273668 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-29 01:01:14.273687 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-29 01:01:14.274336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-29 01:01:14.274374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.274393 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.274413 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.274431 | orchestrator | 2025-05-29 01:01:14.274448 | orchestrator | TASK [keystone : Check if policies shall be overwritten] *********************** 2025-05-29 01:01:14.274480 | orchestrator | Thursday 29 May 2025 00:58:40 +0000 (0:00:02.283) 0:00:04.389 ********** 2025-05-29 01:01:14.274500 | orchestrator | ok: [testbed-node-0 -> localhost] => (item=/opt/configuration/environments/kolla/files/overlays/keystone/policy.yaml) 2025-05-29 01:01:14.274519 | orchestrator | 2025-05-29 01:01:14.274537 | orchestrator | TASK [keystone : Set keystone policy file] ************************************* 2025-05-29 01:01:14.274556 | orchestrator | Thursday 29 May 2025 00:58:40 +0000 (0:00:00.681) 0:00:05.071 ********** 2025-05-29 01:01:14.274574 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:14.274593 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:01:14.274611 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:01:14.274629 | orchestrator | 2025-05-29 01:01:14.274647 | orchestrator | TASK [keystone : Check if Keystone domain-specific config is supplied] ********* 2025-05-29 01:01:14.274665 | orchestrator | Thursday 29 May 2025 00:58:41 +0000 (0:00:00.464) 0:00:05.535 ********** 2025-05-29 01:01:14.274682 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-29 01:01:14.274701 | orchestrator | 2025-05-29 01:01:14.274720 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-29 01:01:14.274754 | orchestrator | Thursday 29 May 2025 00:58:41 
+0000 (0:00:00.418) 0:00:05.954 ********** 2025-05-29 01:01:14.274774 | orchestrator | included: /ansible/roles/keystone/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:01:14.274794 | orchestrator | 2025-05-29 01:01:14.274824 | orchestrator | TASK [service-cert-copy : keystone | Copying over extra CA certificates] ******* 2025-05-29 01:01:14.274844 | orchestrator | Thursday 29 May 2025 00:58:42 +0000 (0:00:00.639) 0:00:06.593 ********** 2025-05-29 01:01:14.274866 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.274890 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.274932 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.275144 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-29 01:01:14.275197 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-29 01:01:14.275221 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-29 01:01:14.275242 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.275261 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.275280 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.275296 | orchestrator | 2025-05-29 01:01:14.275307 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS certificate] *** 2025-05-29 01:01:14.275319 | orchestrator | Thursday 29 May 2025 00:58:45 +0000 (0:00:03.382) 0:00:09.976 
********** 2025-05-29 01:01:14.275347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-29 01:01:14.275368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 01:01:14.275380 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-29 01:01:14.275391 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:14.275403 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-29 01:01:14.275416 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 01:01:14.275436 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-29 01:01:14.275454 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:01:14.275470 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 
'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-29 01:01:14.275482 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 01:01:14.275494 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-29 01:01:14.275505 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:01:14.275516 | orchestrator | 2025-05-29 01:01:14.275528 | orchestrator | TASK [service-cert-copy : keystone | Copying over backend internal TLS key] **** 2025-05-29 01:01:14.275539 | orchestrator | Thursday 29 May 2025 00:58:46 +0000 (0:00:00.932) 0:00:10.908 ********** 2025-05-29 01:01:14.275551 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-29 01:01:14.275576 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 01:01:14.275599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-29 01:01:14.275611 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:14.275623 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-29 01:01:14.275635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 
'timeout': '30'}}})  2025-05-29 01:01:14.275646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-29 01:01:14.275657 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:01:14.275678 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}})  2025-05-29 01:01:14.275701 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 01:01:14.275850 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}})  2025-05-29 01:01:14.275868 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:01:14.275879 | orchestrator | 2025-05-29 01:01:14.275890 | orchestrator | TASK [keystone : Copying over config.json files for services] ****************** 2025-05-29 01:01:14.275901 | orchestrator | Thursday 29 May 2025 00:58:47 +0000 (0:00:01.092) 0:00:12.000 ********** 2025-05-29 01:01:14.275913 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.275926 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.275955 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.275974 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-29 01:01:14.275985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-29 01:01:14.275997 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-29 01:01:14.276008 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.276027 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.276045 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.276056 | orchestrator | 2025-05-29 01:01:14.276068 | orchestrator | TASK [keystone : Copying over keystone.conf] *********************************** 2025-05-29 01:01:14.276078 | orchestrator | Thursday 29 May 2025 00:58:50 +0000 (0:00:03.161) 0:00:15.162 ********** 2025-05-29 01:01:14.276161 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 
'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.276185 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 01:01:14.276207 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.276241 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 01:01:14.276275 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.276289 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 01:01:14.276301 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.276312 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.276330 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.276343 | orchestrator | 2025-05-29 01:01:14.276362 | orchestrator | TASK [keystone : Copying keystone-startup script for keystone] ***************** 2025-05-29 01:01:14.276380 | orchestrator | Thursday 29 May 2025 00:58:58 +0000 (0:00:07.173) 0:00:22.335 ********** 2025-05-29 01:01:14.276398 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:01:14.276416 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:01:14.276435 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:01:14.276455 | orchestrator | 2025-05-29 01:01:14.276473 | orchestrator | TASK [keystone : Create Keystone domain-specific config directory] ************* 2025-05-29 01:01:14.276490 | orchestrator | Thursday 29 May 2025 00:59:00 +0000 (0:00:02.402) 0:00:24.738 ********** 2025-05-29 01:01:14.276503 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:14.276515 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:01:14.276527 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:01:14.276539 | orchestrator | 2025-05-29 01:01:14.276557 | orchestrator | TASK [keystone : Get file list in custom domains folder] *********************** 2025-05-29 01:01:14.276571 | orchestrator | Thursday 29 May 2025 00:59:01 +0000 (0:00:01.228) 0:00:25.966 ********** 2025-05-29 01:01:14.276588 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:14.276606 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:01:14.276624 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:01:14.276643 | orchestrator | 2025-05-29 01:01:14.276661 | orchestrator | TASK [keystone : Copying Keystone Domain specific settings] ******************** 2025-05-29 01:01:14.276676 | orchestrator | Thursday 29 May 2025 00:59:02 +0000 (0:00:00.408) 0:00:26.375 ********** 
2025-05-29 01:01:14.276693 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:14.276710 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:01:14.276727 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:01:14.276742 | orchestrator | 2025-05-29 01:01:14.276759 | orchestrator | TASK [keystone : Copying over existing policy file] **************************** 2025-05-29 01:01:14.276776 | orchestrator | Thursday 29 May 2025 00:59:02 +0000 (0:00:00.352) 0:00:26.727 ********** 2025-05-29 01:01:14.276801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.276821 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 01:01:14.276851 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.276868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 01:01:14.276898 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.276923 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}})  2025-05-29 01:01:14.276942 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.276971 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.276989 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.277006 | orchestrator | 2025-05-29 01:01:14.277024 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-29 01:01:14.277040 | orchestrator | Thursday 29 May 2025 00:59:04 +0000 (0:00:02.502) 0:00:29.229 ********** 
2025-05-29 01:01:14.277055 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:14.277065 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:01:14.277076 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:01:14.277119 | orchestrator | 2025-05-29 01:01:14.277136 | orchestrator | TASK [keystone : Copying over wsgi-keystone.conf] ****************************** 2025-05-29 01:01:14.277152 | orchestrator | Thursday 29 May 2025 00:59:05 +0000 (0:00:00.486) 0:00:29.716 ********** 2025-05-29 01:01:14.277168 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-29 01:01:14.277186 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-29 01:01:14.277213 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/keystone/templates/wsgi-keystone.conf.j2) 2025-05-29 01:01:14.277231 | orchestrator | 2025-05-29 01:01:14.277248 | orchestrator | TASK [keystone : Checking whether keystone-paste.ini file exists] ************** 2025-05-29 01:01:14.277265 | orchestrator | Thursday 29 May 2025 00:59:07 +0000 (0:00:02.224) 0:00:31.941 ********** 2025-05-29 01:01:14.277282 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-29 01:01:14.277300 | orchestrator | 2025-05-29 01:01:14.277318 | orchestrator | TASK [keystone : Copying over keystone-paste.ini] ****************************** 2025-05-29 01:01:14.277334 | orchestrator | Thursday 29 May 2025 00:59:08 +0000 (0:00:00.628) 0:00:32.569 ********** 2025-05-29 01:01:14.277352 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:01:14.277370 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:01:14.277387 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:14.277428 | orchestrator | 2025-05-29 01:01:14.277444 | orchestrator | TASK [keystone : Generate the required cron jobs for the node] ***************** 2025-05-29 01:01:14.277461 | orchestrator | Thursday 29 May 
2025 00:59:09 +0000 (0:00:01.160) 0:00:33.730 ********** 2025-05-29 01:01:14.277478 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-29 01:01:14.277495 | orchestrator | ok: [testbed-node-2 -> localhost] 2025-05-29 01:01:14.277512 | orchestrator | ok: [testbed-node-1 -> localhost] 2025-05-29 01:01:14.277530 | orchestrator | 2025-05-29 01:01:14.277554 | orchestrator | TASK [keystone : Set fact with the generated cron jobs for building the crontab later] *** 2025-05-29 01:01:14.277591 | orchestrator | Thursday 29 May 2025 00:59:10 +0000 (0:00:01.205) 0:00:34.935 ********** 2025-05-29 01:01:14.277603 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:14.277613 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:01:14.277623 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:01:14.277632 | orchestrator | 2025-05-29 01:01:14.277642 | orchestrator | TASK [keystone : Copying files for keystone-fernet] **************************** 2025-05-29 01:01:14.277652 | orchestrator | Thursday 29 May 2025 00:59:10 +0000 (0:00:00.330) 0:00:35.266 ********** 2025-05-29 01:01:14.277662 | orchestrator | changed: [testbed-node-0] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-29 01:01:14.277672 | orchestrator | changed: [testbed-node-1] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-29 01:01:14.277682 | orchestrator | changed: [testbed-node-2] => (item={'src': 'crontab.j2', 'dest': 'crontab'}) 2025-05-29 01:01:14.277692 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-29 01:01:14.277701 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-29 01:01:14.277711 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-rotate.sh.j2', 'dest': 'fernet-rotate.sh'}) 2025-05-29 01:01:14.277721 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 
2025-05-29 01:01:14.277732 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-29 01:01:14.277741 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-node-sync.sh.j2', 'dest': 'fernet-node-sync.sh'}) 2025-05-29 01:01:14.277751 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-29 01:01:14.277761 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-29 01:01:14.277771 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-push.sh.j2', 'dest': 'fernet-push.sh'}) 2025-05-29 01:01:14.277781 | orchestrator | changed: [testbed-node-0] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-29 01:01:14.277791 | orchestrator | changed: [testbed-node-1] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-29 01:01:14.277800 | orchestrator | changed: [testbed-node-2] => (item={'src': 'fernet-healthcheck.sh.j2', 'dest': 'fernet-healthcheck.sh'}) 2025-05-29 01:01:14.277810 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-29 01:01:14.277820 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-29 01:01:14.277833 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 2025-05-29 01:01:14.277851 | orchestrator | changed: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-29 01:01:14.277868 | orchestrator | changed: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-29 01:01:14.277885 | orchestrator | changed: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 2025-05-29 01:01:14.277903 | orchestrator | 2025-05-29 01:01:14.277921 | orchestrator | TASK [keystone : Copying files for 
keystone-ssh] ******************************* 2025-05-29 01:01:14.277938 | orchestrator | Thursday 29 May 2025 00:59:21 +0000 (0:00:10.610) 0:00:45.876 ********** 2025-05-29 01:01:14.277955 | orchestrator | changed: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-29 01:01:14.277970 | orchestrator | changed: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-29 01:01:14.277986 | orchestrator | changed: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 2025-05-29 01:01:14.278004 | orchestrator | changed: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-29 01:01:14.278059 | orchestrator | changed: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-29 01:01:14.278111 | orchestrator | changed: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 2025-05-29 01:01:14.278131 | orchestrator | 2025-05-29 01:01:14.278149 | orchestrator | TASK [keystone : Check keystone containers] ************************************ 2025-05-29 01:01:14.278166 | orchestrator | Thursday 29 May 2025 00:59:24 +0000 (0:00:03.235) 0:00:49.112 ********** 2025-05-29 01:01:14.278195 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': 
'5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.278217 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:5000'], 'timeout': '30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.278239 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone', 'value': {'container_name': 'keystone', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:5000'], 'timeout': 
'30'}, 'haproxy': {'keystone_internal': {'enabled': True, 'mode': 'http', 'external': False, 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}, 'keystone_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'tls_backend': 'no', 'port': '5000', 'listen_port': '5000', 'backend_http_extra': ['balance "roundrobin"']}}}}) 2025-05-29 01:01:14.278258 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-29 01:01:14.278299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-29 01:01:14.278325 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-ssh', 'value': {'container_name': 'keystone_ssh', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-ssh:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8023'], 'timeout': '30'}}}) 2025-05-29 01:01:14.278336 | orchestrator | changed: [testbed-node-0] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.278346 | orchestrator | changed: [testbed-node-2] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': ['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.278356 | orchestrator | changed: [testbed-node-1] => (item={'key': 'keystone-fernet', 'value': {'container_name': 'keystone_fernet', 'group': 'keystone', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/keystone-fernet:25.0.1.20241206', 'volumes': 
['/etc/kolla/keystone-fernet/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'keystone_fernet_tokens:/etc/keystone/fernet-keys'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', '/usr/bin/fernet-healthcheck.sh'], 'timeout': '30'}}}) 2025-05-29 01:01:14.278366 | orchestrator | 2025-05-29 01:01:14.278376 | orchestrator | TASK [keystone : include_tasks] ************************************************ 2025-05-29 01:01:14.278386 | orchestrator | Thursday 29 May 2025 00:59:27 +0000 (0:00:02.909) 0:00:52.022 ********** 2025-05-29 01:01:14.278396 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:14.278406 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:01:14.278415 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:01:14.278425 | orchestrator | 2025-05-29 01:01:14.278434 | orchestrator | TASK [keystone : Creating keystone database] *********************************** 2025-05-29 01:01:14.278450 | orchestrator | Thursday 29 May 2025 00:59:28 +0000 (0:00:00.287) 0:00:52.309 ********** 2025-05-29 01:01:14.278460 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:01:14.278469 | orchestrator | 2025-05-29 01:01:14.278479 | orchestrator | TASK [keystone : Creating Keystone database user and setting permissions] ****** 2025-05-29 01:01:14.278488 | orchestrator | Thursday 29 May 2025 00:59:30 +0000 (0:00:02.595) 0:00:54.905 ********** 2025-05-29 01:01:14.278498 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:01:14.278507 | orchestrator | 2025-05-29 01:01:14.278517 | orchestrator | TASK [keystone : Checking for any running keystone_fernet containers] ********** 2025-05-29 01:01:14.278526 | orchestrator | Thursday 29 May 2025 00:59:33 +0000 (0:00:02.402) 0:00:57.308 ********** 2025-05-29 01:01:14.278536 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:14.278545 | orchestrator | ok: 
[testbed-node-2] 2025-05-29 01:01:14.278555 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:01:14.278564 | orchestrator | 2025-05-29 01:01:14.278574 | orchestrator | TASK [keystone : Group nodes where keystone_fernet is running] ***************** 2025-05-29 01:01:14.278583 | orchestrator | Thursday 29 May 2025 00:59:34 +0000 (0:00:01.029) 0:00:58.337 ********** 2025-05-29 01:01:14.278593 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:14.278608 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:01:14.278617 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:01:14.278627 | orchestrator | 2025-05-29 01:01:14.278636 | orchestrator | TASK [keystone : Fail if any hosts need bootstrapping and not all hosts targeted] *** 2025-05-29 01:01:14.278646 | orchestrator | Thursday 29 May 2025 00:59:34 +0000 (0:00:00.355) 0:00:58.693 ********** 2025-05-29 01:01:14.278660 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:14.278677 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:01:14.278697 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:01:14.278713 | orchestrator | 2025-05-29 01:01:14.278731 | orchestrator | TASK [keystone : Running Keystone bootstrap container] ************************* 2025-05-29 01:01:14.278747 | orchestrator | Thursday 29 May 2025 00:59:34 +0000 (0:00:00.502) 0:00:59.196 ********** 2025-05-29 01:01:14.278764 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:01:14.278780 | orchestrator | 2025-05-29 01:01:14.278797 | orchestrator | TASK [keystone : Running Keystone fernet bootstrap container] ****************** 2025-05-29 01:01:14.278814 | orchestrator | Thursday 29 May 2025 00:59:47 +0000 (0:00:12.902) 0:01:12.098 ********** 2025-05-29 01:01:14.278826 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:01:14.278843 | orchestrator | 2025-05-29 01:01:14.278860 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-29 01:01:14.278877 | orchestrator | 
Thursday 29 May 2025 00:59:57 +0000 (0:00:09.212) 0:01:21.310 ********** 2025-05-29 01:01:14.278894 | orchestrator | 2025-05-29 01:01:14.278913 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-29 01:01:14.278931 | orchestrator | Thursday 29 May 2025 00:59:57 +0000 (0:00:00.052) 0:01:21.363 ********** 2025-05-29 01:01:14.278948 | orchestrator | 2025-05-29 01:01:14.278965 | orchestrator | TASK [keystone : Flush handlers] *********************************************** 2025-05-29 01:01:14.278980 | orchestrator | Thursday 29 May 2025 00:59:57 +0000 (0:00:00.052) 0:01:21.415 ********** 2025-05-29 01:01:14.278997 | orchestrator | 2025-05-29 01:01:14.279007 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-ssh container] ******************** 2025-05-29 01:01:14.279020 | orchestrator | Thursday 29 May 2025 00:59:57 +0000 (0:00:00.055) 0:01:21.471 ********** 2025-05-29 01:01:14.279037 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:01:14.279053 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:01:14.279071 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:01:14.279157 | orchestrator | 2025-05-29 01:01:14.279174 | orchestrator | RUNNING HANDLER [keystone : Restart keystone-fernet container] ***************** 2025-05-29 01:01:14.279191 | orchestrator | Thursday 29 May 2025 01:00:12 +0000 (0:00:14.961) 0:01:36.432 ********** 2025-05-29 01:01:14.279206 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:01:14.279222 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:01:14.279236 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:01:14.279251 | orchestrator | 2025-05-29 01:01:14.279276 | orchestrator | RUNNING HANDLER [keystone : Restart keystone container] ************************ 2025-05-29 01:01:14.279292 | orchestrator | Thursday 29 May 2025 01:00:22 +0000 (0:00:09.893) 0:01:46.326 ********** 2025-05-29 01:01:14.279307 | orchestrator | changed: [testbed-node-0] 
2025-05-29 01:01:14.279323 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:01:14.279338 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:01:14.279357 | orchestrator |
2025-05-29 01:01:14.279372 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-29 01:01:14.279390 | orchestrator | Thursday 29 May 2025 01:00:27 +0000 (0:00:05.710) 0:01:52.037 **********
2025-05-29 01:01:14.279406 | orchestrator | included: /ansible/roles/keystone/tasks/distribute_fernet.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 01:01:14.279423 | orchestrator |
2025-05-29 01:01:14.279442 | orchestrator | TASK [keystone : Waiting for Keystone SSH port to be UP] ***********************
2025-05-29 01:01:14.279452 | orchestrator | Thursday 29 May 2025 01:00:28 +0000 (0:00:00.743) 0:01:52.780 **********
2025-05-29 01:01:14.279462 | orchestrator | ok: [testbed-node-1]
2025-05-29 01:01:14.279472 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:01:14.279481 | orchestrator | ok: [testbed-node-2]
2025-05-29 01:01:14.279491 | orchestrator |
2025-05-29 01:01:14.279500 | orchestrator | TASK [keystone : Run key distribution] *****************************************
2025-05-29 01:01:14.279510 | orchestrator | Thursday 29 May 2025 01:00:29 +0000 (0:00:01.044) 0:01:53.824 **********
2025-05-29 01:01:14.279519 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:01:14.279529 | orchestrator |
2025-05-29 01:01:14.279539 | orchestrator | TASK [keystone : Creating admin project, user, role, service, and endpoint] ****
2025-05-29 01:01:14.279548 | orchestrator | Thursday 29 May 2025 01:00:31 +0000 (0:00:01.473) 0:01:55.298 **********
2025-05-29 01:01:14.279558 | orchestrator | changed: [testbed-node-0] => (item=RegionOne)
2025-05-29 01:01:14.279568 | orchestrator |
2025-05-29 01:01:14.279577 | orchestrator | TASK [service-ks-register : keystone | Creating services] **********************
2025-05-29 01:01:14.279587 | orchestrator | Thursday 29 May 2025 01:00:40 +0000 (0:00:09.478) 0:02:04.777 **********
2025-05-29 01:01:14.279594 | orchestrator | changed: [testbed-node-0] => (item=keystone (identity))
2025-05-29 01:01:14.279602 | orchestrator |
2025-05-29 01:01:14.279610 | orchestrator | TASK [service-ks-register : keystone | Creating endpoints] *********************
2025-05-29 01:01:14.279618 | orchestrator | Thursday 29 May 2025 01:00:59 +0000 (0:00:19.450) 0:02:24.227 **********
2025-05-29 01:01:14.279626 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api-int.testbed.osism.xyz:5000 -> internal)
2025-05-29 01:01:14.279634 | orchestrator | ok: [testbed-node-0] => (item=keystone -> https://api.testbed.osism.xyz:5000 -> public)
2025-05-29 01:01:14.279642 | orchestrator |
2025-05-29 01:01:14.279649 | orchestrator | TASK [service-ks-register : keystone | Creating projects] **********************
2025-05-29 01:01:14.279657 | orchestrator | Thursday 29 May 2025 01:01:06 +0000 (0:00:06.844) 0:02:31.072 **********
2025-05-29 01:01:14.279665 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:01:14.279673 | orchestrator |
2025-05-29 01:01:14.279681 | orchestrator | TASK [service-ks-register : keystone | Creating users] *************************
2025-05-29 01:01:14.279688 | orchestrator | Thursday 29 May 2025 01:01:06 +0000 (0:00:00.119) 0:02:31.191 **********
2025-05-29 01:01:14.279696 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:01:14.279704 | orchestrator |
2025-05-29 01:01:14.279712 | orchestrator | TASK [service-ks-register : keystone | Creating roles] *************************
2025-05-29 01:01:14.279728 | orchestrator | Thursday 29 May 2025 01:01:07 +0000 (0:00:00.106) 0:02:31.298 **********
2025-05-29 01:01:14.279737 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:01:14.279745 | orchestrator |
2025-05-29 01:01:14.279752 | orchestrator | TASK [service-ks-register : keystone | Granting user roles] ********************
2025-05-29 01:01:14.279760 | orchestrator | Thursday 29 May 2025 01:01:07 +0000 (0:00:00.132) 0:02:31.431 **********
2025-05-29 01:01:14.279768 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:01:14.279776 | orchestrator |
2025-05-29 01:01:14.279784 | orchestrator | TASK [keystone : Creating default user role] ***********************************
2025-05-29 01:01:14.279798 | orchestrator | Thursday 29 May 2025 01:01:07 +0000 (0:00:00.398) 0:02:31.829 **********
2025-05-29 01:01:14.279806 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:01:14.279814 | orchestrator |
2025-05-29 01:01:14.279822 | orchestrator | TASK [keystone : include_tasks] ************************************************
2025-05-29 01:01:14.279830 | orchestrator | Thursday 29 May 2025 01:01:10 +0000 (0:00:03.219) 0:02:35.049 **********
2025-05-29 01:01:14.279838 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:01:14.279846 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:01:14.279854 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:01:14.279862 | orchestrator |
2025-05-29 01:01:14.279869 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:01:14.279883 | orchestrator | testbed-node-0 : ok=36  changed=20  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-29 01:01:14.279892 | orchestrator | testbed-node-1 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-29 01:01:14.279900 | orchestrator | testbed-node-2 : ok=24  changed=13  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0
2025-05-29 01:01:14.279908 | orchestrator |
2025-05-29 01:01:14.279916 | orchestrator |
2025-05-29 01:01:14.279924 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 01:01:14.279932 | orchestrator | Thursday 29 May 2025 01:01:11 +0000 (0:00:00.555) 0:02:35.604 **********
2025-05-29 01:01:14.279940 | orchestrator | ===============================================================================
2025-05-29 01:01:14.279948 | orchestrator | service-ks-register : keystone | Creating services --------------------- 19.45s
2025-05-29 01:01:14.279956 | orchestrator | keystone : Restart keystone-ssh container ------------------------------ 14.96s
2025-05-29 01:01:14.279964 | orchestrator | keystone : Running Keystone bootstrap container ------------------------ 12.90s
2025-05-29 01:01:14.279971 | orchestrator | keystone : Copying files for keystone-fernet --------------------------- 10.61s
2025-05-29 01:01:14.279979 | orchestrator | keystone : Restart keystone-fernet container ---------------------------- 9.89s
2025-05-29 01:01:14.279987 | orchestrator | keystone : Creating admin project, user, role, service, and endpoint ---- 9.48s
2025-05-29 01:01:14.279995 | orchestrator | keystone : Running Keystone fernet bootstrap container ------------------ 9.21s
2025-05-29 01:01:14.280003 | orchestrator | keystone : Copying over keystone.conf ----------------------------------- 7.17s
2025-05-29 01:01:14.280011 | orchestrator | service-ks-register : keystone | Creating endpoints --------------------- 6.84s
2025-05-29 01:01:14.280019 | orchestrator | keystone : Restart keystone container ----------------------------------- 5.71s
2025-05-29 01:01:14.280026 | orchestrator | service-cert-copy : keystone | Copying over extra CA certificates ------- 3.38s
2025-05-29 01:01:14.280034 | orchestrator | keystone : Copying files for keystone-ssh ------------------------------- 3.24s
2025-05-29 01:01:14.280042 | orchestrator | keystone : Creating default user role ----------------------------------- 3.22s
2025-05-29 01:01:14.280050 | orchestrator | keystone : Copying over config.json files for services ------------------ 3.16s
2025-05-29 01:01:14.280058 | orchestrator | keystone : Check keystone containers ------------------------------------ 2.91s
2025-05-29 01:01:14.280066 | orchestrator | keystone : Creating keystone database ----------------------------------- 2.60s
2025-05-29 01:01:14.280074 | orchestrator | keystone : Copying over existing policy file ---------------------------- 2.50s
2025-05-29 01:01:14.280081 | orchestrator | keystone : Creating Keystone database user and setting permissions ------ 2.40s
2025-05-29 01:01:14.280113 | orchestrator | keystone : Copying keystone-startup script for keystone ----------------- 2.40s
2025-05-29 01:01:14.280122 | orchestrator | keystone : Ensuring config directories exist ---------------------------- 2.28s
2025-05-29 01:01:14.280130 | orchestrator | 2025-05-29 01:01:14 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:14.280143 | orchestrator | 2025-05-29 01:01:14 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:01:14.280152 | orchestrator | 2025-05-29 01:01:14 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:17.333202 | orchestrator | 2025-05-29 01:01:17 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:17.333379 | orchestrator | 2025-05-29 01:01:17 | INFO  | Task dc61d1c4-d63c-4661-bb18-14b1db4b37b8 is in state STARTED
2025-05-29 01:01:17.333412 | orchestrator | 2025-05-29 01:01:17 | INFO  | Task 880a1740-15c7-4628-8023-2046f78b4e3a is in state SUCCESS
2025-05-29 01:01:17.334260 | orchestrator |
2025-05-29 01:01:17.334294 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12
2025-05-29 01:01:17.334307 | orchestrator |
2025-05-29 01:01:17.334318 | orchestrator | PLAY [Apply role fetch-keys] ***************************************************
2025-05-29 01:01:17.334329 | orchestrator |
2025-05-29 01:01:17.334340 | orchestrator | TASK [ceph-facts : include_tasks convert_grafana_server_group_name.yml] ********
2025-05-29 01:01:17.334352 | orchestrator | Thursday 29 May 2025 01:00:47 +0000 (0:00:00.460) 0:00:00.461 **********
2025-05-29
01:01:17.334363 | orchestrator | included: /ansible/roles/ceph-facts/tasks/convert_grafana_server_group_name.yml for testbed-node-0 2025-05-29 01:01:17.334375 | orchestrator | 2025-05-29 01:01:17.334385 | orchestrator | TASK [ceph-facts : convert grafana-server group name if exist] ***************** 2025-05-29 01:01:17.334396 | orchestrator | Thursday 29 May 2025 01:00:47 +0000 (0:00:00.209) 0:00:00.670 ********** 2025-05-29 01:01:17.334408 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-29 01:01:17.334419 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-1) 2025-05-29 01:01:17.334430 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-2) 2025-05-29 01:01:17.334441 | orchestrator | 2025-05-29 01:01:17.334451 | orchestrator | TASK [ceph-facts : include facts.yml] ****************************************** 2025-05-29 01:01:17.334462 | orchestrator | Thursday 29 May 2025 01:00:48 +0000 (0:00:00.863) 0:00:01.533 ********** 2025-05-29 01:01:17.334473 | orchestrator | included: /ansible/roles/ceph-facts/tasks/facts.yml for testbed-node-0 2025-05-29 01:01:17.334484 | orchestrator | 2025-05-29 01:01:17.334494 | orchestrator | TASK [ceph-facts : check if it is atomic host] ********************************* 2025-05-29 01:01:17.334521 | orchestrator | Thursday 29 May 2025 01:00:48 +0000 (0:00:00.229) 0:00:01.762 ********** 2025-05-29 01:01:17.334533 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.334544 | orchestrator | 2025-05-29 01:01:17.334555 | orchestrator | TASK [ceph-facts : set_fact is_atomic] ***************************************** 2025-05-29 01:01:17.334565 | orchestrator | Thursday 29 May 2025 01:00:49 +0000 (0:00:00.587) 0:00:02.349 ********** 2025-05-29 01:01:17.334576 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.334587 | orchestrator | 2025-05-29 01:01:17.334598 | orchestrator | TASK [ceph-facts : check if podman binary is present] ************************** 2025-05-29 
01:01:17.334608 | orchestrator | Thursday 29 May 2025 01:00:49 +0000 (0:00:00.126) 0:00:02.476 ********** 2025-05-29 01:01:17.334619 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.334630 | orchestrator | 2025-05-29 01:01:17.334641 | orchestrator | TASK [ceph-facts : set_fact container_binary] ********************************** 2025-05-29 01:01:17.334652 | orchestrator | Thursday 29 May 2025 01:00:49 +0000 (0:00:00.427) 0:00:02.904 ********** 2025-05-29 01:01:17.334662 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.334673 | orchestrator | 2025-05-29 01:01:17.334684 | orchestrator | TASK [ceph-facts : set_fact ceph_cmd] ****************************************** 2025-05-29 01:01:17.334695 | orchestrator | Thursday 29 May 2025 01:00:49 +0000 (0:00:00.144) 0:00:03.049 ********** 2025-05-29 01:01:17.334706 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.334716 | orchestrator | 2025-05-29 01:01:17.334727 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python] ********************* 2025-05-29 01:01:17.334738 | orchestrator | Thursday 29 May 2025 01:00:49 +0000 (0:00:00.137) 0:00:03.186 ********** 2025-05-29 01:01:17.334772 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.334783 | orchestrator | 2025-05-29 01:01:17.334794 | orchestrator | TASK [ceph-facts : set_fact discovered_interpreter_python if not previously set] *** 2025-05-29 01:01:17.334805 | orchestrator | Thursday 29 May 2025 01:00:50 +0000 (0:00:00.161) 0:00:03.347 ********** 2025-05-29 01:01:17.334815 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.334827 | orchestrator | 2025-05-29 01:01:17.334838 | orchestrator | TASK [ceph-facts : set_fact ceph_release ceph_stable_release] ****************** 2025-05-29 01:01:17.334849 | orchestrator | Thursday 29 May 2025 01:00:50 +0000 (0:00:00.129) 0:00:03.476 ********** 2025-05-29 01:01:17.334860 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.334871 | orchestrator | 2025-05-29 
01:01:17.334884 | orchestrator | TASK [ceph-facts : set_fact monitor_name ansible_facts['hostname']] ************ 2025-05-29 01:01:17.334898 | orchestrator | Thursday 29 May 2025 01:00:50 +0000 (0:00:00.320) 0:00:03.797 ********** 2025-05-29 01:01:17.334911 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0) 2025-05-29 01:01:17.334924 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-29 01:01:17.334938 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-29 01:01:17.334950 | orchestrator | 2025-05-29 01:01:17.334964 | orchestrator | TASK [ceph-facts : set_fact container_exec_cmd] ******************************** 2025-05-29 01:01:17.334976 | orchestrator | Thursday 29 May 2025 01:00:51 +0000 (0:00:00.697) 0:00:04.494 ********** 2025-05-29 01:01:17.334989 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.335002 | orchestrator | 2025-05-29 01:01:17.335016 | orchestrator | TASK [ceph-facts : find a running mon container] ******************************* 2025-05-29 01:01:17.335028 | orchestrator | Thursday 29 May 2025 01:00:51 +0000 (0:00:00.257) 0:00:04.752 ********** 2025-05-29 01:01:17.335041 | orchestrator | changed: [testbed-node-0] => (item=testbed-node-0) 2025-05-29 01:01:17.335054 | orchestrator | changed: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1) 2025-05-29 01:01:17.335067 | orchestrator | changed: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2) 2025-05-29 01:01:17.335080 | orchestrator | 2025-05-29 01:01:17.335117 | orchestrator | TASK [ceph-facts : check for a ceph mon socket] ******************************** 2025-05-29 01:01:17.335130 | orchestrator | Thursday 29 May 2025 01:00:53 +0000 (0:00:01.939) 0:00:06.691 ********** 2025-05-29 01:01:17.335143 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-29 01:01:17.335156 | orchestrator | skipping: 
[testbed-node-0] => (item=testbed-node-1)  2025-05-29 01:01:17.335170 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-29 01:01:17.335183 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.335196 | orchestrator | 2025-05-29 01:01:17.335210 | orchestrator | TASK [ceph-facts : check if the ceph mon socket is in-use] ********************* 2025-05-29 01:01:17.335236 | orchestrator | Thursday 29 May 2025 01:00:53 +0000 (0:00:00.421) 0:00:07.112 ********** 2025-05-29 01:01:17.335250 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'})  2025-05-29 01:01:17.335265 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'})  2025-05-29 01:01:17.335277 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'})  2025-05-29 01:01:17.335289 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.335301 | orchestrator | 2025-05-29 01:01:17.335312 | orchestrator | TASK [ceph-facts : set_fact running_mon - non_container] *********************** 2025-05-29 01:01:17.335324 | orchestrator | Thursday 29 May 2025 01:00:54 +0000 (0:00:00.829) 0:00:07.941 ********** 2025-05-29 01:01:17.335352 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 
'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-29 01:01:17.335367 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-29 01:01:17.335379 | orchestrator | skipping: [testbed-node-0] => (item={'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': {'changed': False, 'skipped': True, 'skip_reason': 'Conditional result was False', 'false_condition': 'not containerized_deployment | bool', 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}, 'ansible_loop_var': 'item'})  2025-05-29 01:01:17.335391 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.335402 | orchestrator | 2025-05-29 01:01:17.335414 | orchestrator | TASK [ceph-facts : set_fact running_mon - container] *************************** 2025-05-29 01:01:17.335425 | orchestrator | Thursday 29 May 2025 01:00:54 +0000 (0:00:00.168) 0:00:08.110 ********** 2025-05-29 01:01:17.335439 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '27cac2c63622', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-0'], 'start': '2025-05-29 01:00:52.173849', 'end': '2025-05-29 01:00:52.233472', 'delta': '0:00:00.059623', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-0', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 
'stdin': None}}, 'stdout_lines': ['27cac2c63622'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-0', 'ansible_loop_var': 'item'}) 2025-05-29 01:01:17.335455 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '3f53557b52db', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-1'], 'start': '2025-05-29 01:00:52.732480', 'end': '2025-05-29 01:00:52.777403', 'delta': '0:00:00.044923', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-1', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['3f53557b52db'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-1', 'ansible_loop_var': 'item'}) 2025-05-29 01:01:17.335476 | orchestrator | ok: [testbed-node-0] => (item={'changed': True, 'stdout': '06a206522b4c', 'stderr': '', 'rc': 0, 'cmd': ['docker', 'ps', '-q', '--filter', 'name=ceph-mon-testbed-node-2'], 'start': '2025-05-29 01:00:53.262028', 'end': '2025-05-29 01:00:53.303187', 'delta': '0:00:00.041159', 'msg': '', 'invocation': {'module_args': {'_raw_params': 'docker ps -q --filter name=ceph-mon-testbed-node-2', '_uses_shell': False, 'stdin_add_newline': True, 'strip_empty_ends': True, 'argv': None, 'chdir': None, 'executable': None, 'creates': None, 'removes': None, 'stdin': None}}, 'stdout_lines': ['06a206522b4c'], 'stderr_lines': [], 'failed': False, 'failed_when_result': False, 'item': 'testbed-node-2', 'ansible_loop_var': 'item'}) 2025-05-29 01:01:17.335489 | orchestrator | 2025-05-29 01:01:17.335501 | orchestrator | TASK [ceph-facts : set_fact _container_exec_cmd] ******************************* 2025-05-29 01:01:17.335520 | orchestrator | Thursday 29 May 2025 01:00:55 +0000 (0:00:00.242) 0:00:08.352 ********** 2025-05-29 
01:01:17.335532 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.335544 | orchestrator | 2025-05-29 01:01:17.335555 | orchestrator | TASK [ceph-facts : get current fsid if cluster is already running] ************* 2025-05-29 01:01:17.335567 | orchestrator | Thursday 29 May 2025 01:00:55 +0000 (0:00:00.259) 0:00:08.612 ********** 2025-05-29 01:01:17.335579 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] 2025-05-29 01:01:17.335591 | orchestrator | 2025-05-29 01:01:17.335602 | orchestrator | TASK [ceph-facts : set_fact current_fsid rc 1] ********************************* 2025-05-29 01:01:17.335619 | orchestrator | Thursday 29 May 2025 01:00:57 +0000 (0:00:01.655) 0:00:10.267 ********** 2025-05-29 01:01:17.335631 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.335642 | orchestrator | 2025-05-29 01:01:17.335653 | orchestrator | TASK [ceph-facts : get current fsid] ******************************************* 2025-05-29 01:01:17.335665 | orchestrator | Thursday 29 May 2025 01:00:57 +0000 (0:00:00.121) 0:00:10.389 ********** 2025-05-29 01:01:17.335676 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.335687 | orchestrator | 2025-05-29 01:01:17.335699 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-29 01:01:17.335710 | orchestrator | Thursday 29 May 2025 01:00:57 +0000 (0:00:00.223) 0:00:10.613 ********** 2025-05-29 01:01:17.335721 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.335733 | orchestrator | 2025-05-29 01:01:17.335744 | orchestrator | TASK [ceph-facts : set_fact fsid from current_fsid] **************************** 2025-05-29 01:01:17.335755 | orchestrator | Thursday 29 May 2025 01:00:57 +0000 (0:00:00.123) 0:00:10.736 ********** 2025-05-29 01:01:17.335766 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.335778 | orchestrator | 2025-05-29 01:01:17.335789 | orchestrator | TASK [ceph-facts : generate cluster fsid] 
************************************** 2025-05-29 01:01:17.335800 | orchestrator | Thursday 29 May 2025 01:00:57 +0000 (0:00:00.132) 0:00:10.869 ********** 2025-05-29 01:01:17.335812 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.335823 | orchestrator | 2025-05-29 01:01:17.335834 | orchestrator | TASK [ceph-facts : set_fact fsid] ********************************************** 2025-05-29 01:01:17.335846 | orchestrator | Thursday 29 May 2025 01:00:57 +0000 (0:00:00.205) 0:00:11.075 ********** 2025-05-29 01:01:17.335857 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.335868 | orchestrator | 2025-05-29 01:01:17.335879 | orchestrator | TASK [ceph-facts : resolve device link(s)] ************************************* 2025-05-29 01:01:17.335891 | orchestrator | Thursday 29 May 2025 01:00:57 +0000 (0:00:00.132) 0:00:11.207 ********** 2025-05-29 01:01:17.335902 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.335913 | orchestrator | 2025-05-29 01:01:17.335925 | orchestrator | TASK [ceph-facts : set_fact build devices from resolved symlinks] ************** 2025-05-29 01:01:17.335936 | orchestrator | Thursday 29 May 2025 01:00:58 +0000 (0:00:00.120) 0:00:11.328 ********** 2025-05-29 01:01:17.335947 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.335959 | orchestrator | 2025-05-29 01:01:17.335970 | orchestrator | TASK [ceph-facts : resolve dedicated_device link(s)] *************************** 2025-05-29 01:01:17.335982 | orchestrator | Thursday 29 May 2025 01:00:58 +0000 (0:00:00.130) 0:00:11.458 ********** 2025-05-29 01:01:17.335993 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.336004 | orchestrator | 2025-05-29 01:01:17.336015 | orchestrator | TASK [ceph-facts : set_fact build dedicated_devices from resolved symlinks] **** 2025-05-29 01:01:17.336027 | orchestrator | Thursday 29 May 2025 01:00:58 +0000 (0:00:00.137) 0:00:11.596 ********** 2025-05-29 01:01:17.336038 | orchestrator | skipping: 
[testbed-node-0] 2025-05-29 01:01:17.336049 | orchestrator | 2025-05-29 01:01:17.336061 | orchestrator | TASK [ceph-facts : resolve bluestore_wal_device link(s)] *********************** 2025-05-29 01:01:17.336072 | orchestrator | Thursday 29 May 2025 01:00:58 +0000 (0:00:00.300) 0:00:11.896 ********** 2025-05-29 01:01:17.336104 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.336116 | orchestrator | 2025-05-29 01:01:17.336134 | orchestrator | TASK [ceph-facts : set_fact build bluestore_wal_devices from resolved symlinks] *** 2025-05-29 01:01:17.336145 | orchestrator | Thursday 29 May 2025 01:00:58 +0000 (0:00:00.141) 0:00:12.038 ********** 2025-05-29 01:01:17.336156 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.336168 | orchestrator | 2025-05-29 01:01:17.336179 | orchestrator | TASK [ceph-facts : set_fact devices generate device list when osd_auto_discovery] *** 2025-05-29 01:01:17.336190 | orchestrator | Thursday 29 May 2025 01:00:58 +0000 (0:00:00.127) 0:00:12.165 ********** 2025-05-29 01:01:17.336202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop0', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:01:17.336222 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop1', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:01:17.336235 | orchestrator | skipping: 
[testbed-node-0] => (item={'key': 'loop2', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:01:17.336248 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop3', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:01:17.336377 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop4', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:01:17.336403 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop5', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:01:17.336414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop6', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': 
{}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:01:17.336425 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'loop7', 'value': {'holders': [], 'host': '', 'links': {'ids': [], 'labels': [], 'masters': [], 'uuids': []}, 'model': None, 'partitions': {}, 'removable': '0', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 0, 'sectorsize': '512', 'size': '0.00 Bytes', 'support_discard': '0', 'vendor': None, 'virtual': 1}})  2025-05-29 01:01:17.336461 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sda', 'value': {'holders': [], 'host': 'SCSI storage controller: Red Hat, Inc. Virtio SCSI', 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d', 'scsi-SQEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d'], 'labels': [], 'masters': [], 'uuids': []}, 'model': 'QEMU HARDDISK', 'partitions': {'sda1': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part1', 'scsi-SQEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part1'], 'labels': ['cloudimg-rootfs'], 'masters': [], 'uuids': ['372462ea-137d-4e94-9465-a2fbb2a7f4ee']}, 'sectors': 165672927, 'sectorsize': 512, 'size': '79.00 GB', 'start': '2099200', 'uuid': '372462ea-137d-4e94-9465-a2fbb2a7f4ee'}, 'sda14': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part14', 'scsi-SQEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part14'], 'labels': [], 'masters': [], 'uuids': []}, 'sectors': 8192, 'sectorsize': 512, 'size': '4.00 MB', 'start': '2048', 'uuid': None}, 'sda15': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part15', 
'scsi-SQEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part15'], 'labels': ['UEFI'], 'masters': [], 'uuids': ['A4F8-12D8']}, 'sectors': 217088, 'sectorsize': 512, 'size': '106.00 MB', 'start': '10240', 'uuid': 'A4F8-12D8'}, 'sda16': {'holders': [], 'links': {'ids': ['scsi-0QEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part16', 'scsi-SQEMU_QEMU_HARDDISK_597f54b7-c847-4d10-a166-56462537237d-part16'], 'labels': ['BOOT'], 'masters': [], 'uuids': ['0de9fa52-b0fa-4de2-9fd3-df23fb104826']}, 'sectors': 1869825, 'sectorsize': 512, 'size': '913.00 MB', 'start': '227328', 'uuid': '0de9fa52-b0fa-4de2-9fd3-df23fb104826'}}, 'removable': '0', 'rotational': '1', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'none', 'sectors': 167772160, 'sectorsize': '512', 'size': '80.00 GB', 'support_discard': '4096', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:01:17.336482 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'sr0', 'value': {'holders': [], 'host': 'IDE interface: Intel Corporation 82371SB PIIX3 IDE [Natoma/Triton II]', 'links': {'ids': ['ata-QEMU_DVD-ROM_QM00001'], 'labels': ['config-2'], 'masters': [], 'uuids': ['2025-05-29-00-02-00-00']}, 'model': 'QEMU DVD-ROM', 'partitions': {}, 'removable': '1', 'rotational': '0', 'sas_address': None, 'sas_device_handle': None, 'scheduler_mode': 'mq-deadline', 'sectors': 253, 'sectorsize': '2048', 'size': '506.00 KB', 'support_discard': '0', 'vendor': 'QEMU', 'virtual': 1}})  2025-05-29 01:01:17.336496 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.336507 | orchestrator | 2025-05-29 01:01:17.336519 | orchestrator | TASK [ceph-facts : get ceph current status] ************************************ 2025-05-29 01:01:17.336530 | orchestrator | Thursday 29 May 2025 01:00:59 +0000 (0:00:00.241) 0:00:12.406 ********** 2025-05-29 01:01:17.336541 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.336552 | orchestrator | 2025-05-29 01:01:17.336563 | orchestrator | TASK 
[ceph-facts : set_fact ceph_current_status] ******************************* 2025-05-29 01:01:17.336574 | orchestrator | Thursday 29 May 2025 01:00:59 +0000 (0:00:00.248) 0:00:12.655 ********** 2025-05-29 01:01:17.336585 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.336596 | orchestrator | 2025-05-29 01:01:17.336607 | orchestrator | TASK [ceph-facts : set_fact rgw_hostname] ************************************** 2025-05-29 01:01:17.336618 | orchestrator | Thursday 29 May 2025 01:00:59 +0000 (0:00:00.129) 0:00:12.785 ********** 2025-05-29 01:01:17.336629 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.336639 | orchestrator | 2025-05-29 01:01:17.336650 | orchestrator | TASK [ceph-facts : check if the ceph conf exists] ****************************** 2025-05-29 01:01:17.336661 | orchestrator | Thursday 29 May 2025 01:00:59 +0000 (0:00:00.133) 0:00:12.919 ********** 2025-05-29 01:01:17.336672 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.336693 | orchestrator | 2025-05-29 01:01:17.336704 | orchestrator | TASK [ceph-facts : set default osd_pool_default_crush_rule fact] *************** 2025-05-29 01:01:17.336714 | orchestrator | Thursday 29 May 2025 01:01:00 +0000 (0:00:00.469) 0:00:13.388 ********** 2025-05-29 01:01:17.336824 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.336840 | orchestrator | 2025-05-29 01:01:17.336851 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-29 01:01:17.336862 | orchestrator | Thursday 29 May 2025 01:01:00 +0000 (0:00:00.136) 0:00:13.524 ********** 2025-05-29 01:01:17.336873 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.336884 | orchestrator | 2025-05-29 01:01:17.336895 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-29 01:01:17.336906 | orchestrator | Thursday 29 May 2025 01:01:00 +0000 (0:00:00.491) 0:00:14.016 ********** 2025-05-29 01:01:17.336916 | 
orchestrator | ok: [testbed-node-0] 2025-05-29 01:01:17.336927 | orchestrator | 2025-05-29 01:01:17.336938 | orchestrator | TASK [ceph-facts : read osd pool default crush rule] *************************** 2025-05-29 01:01:17.336949 | orchestrator | Thursday 29 May 2025 01:01:00 +0000 (0:00:00.169) 0:00:14.185 ********** 2025-05-29 01:01:17.336959 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.336970 | orchestrator | 2025-05-29 01:01:17.336981 | orchestrator | TASK [ceph-facts : set osd_pool_default_crush_rule fact] *********************** 2025-05-29 01:01:17.336992 | orchestrator | Thursday 29 May 2025 01:01:01 +0000 (0:00:00.653) 0:00:14.839 ********** 2025-05-29 01:01:17.337003 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.337013 | orchestrator | 2025-05-29 01:01:17.337024 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4] *** 2025-05-29 01:01:17.337035 | orchestrator | Thursday 29 May 2025 01:01:01 +0000 (0:00:00.157) 0:00:14.996 ********** 2025-05-29 01:01:17.337046 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-29 01:01:17.337057 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-29 01:01:17.337068 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-29 01:01:17.337079 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:01:17.337163 | orchestrator | 2025-05-29 01:01:17.337175 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6] *** 2025-05-29 01:01:17.337185 | orchestrator | Thursday 29 May 2025 01:01:02 +0000 (0:00:00.452) 0:00:15.448 ********** 2025-05-29 01:01:17.337196 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)  2025-05-29 01:01:17.337207 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)  2025-05-29 01:01:17.337218 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)  2025-05-29 
01:01:17.337229 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:01:17.337240 | orchestrator |
2025-05-29 01:01:17.337259 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_address] *************
2025-05-29 01:01:17.337271 | orchestrator | Thursday 29 May 2025 01:01:02 +0000 (0:00:00.459) 0:00:15.908 **********
2025-05-29 01:01:17.337282 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-29 01:01:17.337293 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-1)
2025-05-29 01:01:17.337304 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-2)
2025-05-29 01:01:17.337315 | orchestrator |
2025-05-29 01:01:17.337326 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv4] ****
2025-05-29 01:01:17.337337 | orchestrator | Thursday 29 May 2025 01:01:03 +0000 (0:00:01.112) 0:00:17.021 **********
2025-05-29 01:01:17.337347 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-29 01:01:17.337358 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-29 01:01:17.337369 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-29 01:01:17.337380 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:01:17.337391 | orchestrator |
2025-05-29 01:01:17.337401 | orchestrator | TASK [ceph-facts : set_fact _monitor_addresses to monitor_interface - ipv6] ****
2025-05-29 01:01:17.337412 | orchestrator | Thursday 29 May 2025 01:01:04 +0000 (0:00:00.242) 0:00:17.263 **********
2025-05-29 01:01:17.337432 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-0)
2025-05-29 01:01:17.337443 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-1)
2025-05-29 01:01:17.337453 | orchestrator | skipping: [testbed-node-0] => (item=testbed-node-2)
2025-05-29 01:01:17.337465 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:01:17.337476 | orchestrator |
2025-05-29 01:01:17.337492 | orchestrator | TASK [ceph-facts : set_fact _current_monitor_address] **************************
2025-05-29 01:01:17.337504 | orchestrator | Thursday 29 May 2025 01:01:04 +0000 (0:00:00.230) 0:00:17.493 **********
2025-05-29 01:01:17.337515 | orchestrator | ok: [testbed-node-0] => (item={'name': 'testbed-node-0', 'addr': '192.168.16.10'})
2025-05-29 01:01:17.337527 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-1', 'addr': '192.168.16.11'})
2025-05-29 01:01:17.337538 | orchestrator | skipping: [testbed-node-0] => (item={'name': 'testbed-node-2', 'addr': '192.168.16.12'})
2025-05-29 01:01:17.337549 | orchestrator |
2025-05-29 01:01:17.337560 | orchestrator | TASK [ceph-facts : import_tasks set_radosgw_address.yml] ***********************
2025-05-29 01:01:17.337571 | orchestrator | Thursday 29 May 2025 01:01:04 +0000 (0:00:00.200) 0:00:17.694 **********
2025-05-29 01:01:17.337583 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:01:17.337594 | orchestrator |
2025-05-29 01:01:17.337605 | orchestrator | TASK [ceph-facts : set_fact use_new_ceph_iscsi package or old ceph-iscsi-config/cli] ***
2025-05-29 01:01:17.337618 | orchestrator | Thursday 29 May 2025 01:01:04 +0000 (0:00:00.145) 0:00:17.839 **********
2025-05-29 01:01:17.337629 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:01:17.337641 | orchestrator |
2025-05-29 01:01:17.337652 | orchestrator | TASK [ceph-facts : set_fact ceph_run_cmd] **************************************
2025-05-29 01:01:17.337663 | orchestrator | Thursday 29 May 2025 01:01:04 +0000 (0:00:00.127) 0:00:17.967 **********
2025-05-29 01:01:17.337674 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-29 01:01:17.337685 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-29 01:01:17.337696 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-29 01:01:17.337707 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-29 01:01:17.337718 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-29 01:01:17.337728 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-29 01:01:17.337740 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-29 01:01:17.337751 | orchestrator |
2025-05-29 01:01:17.337762 | orchestrator | TASK [ceph-facts : set_fact ceph_admin_command] ********************************
2025-05-29 01:01:17.337773 | orchestrator | Thursday 29 May 2025 01:01:05 +0000 (0:00:01.060) 0:00:19.027 **********
2025-05-29 01:01:17.337784 | orchestrator | ok: [testbed-node-0] => (item=testbed-node-0)
2025-05-29 01:01:17.337795 | orchestrator | ok: [testbed-node-0 -> testbed-node-1(192.168.16.11)] => (item=testbed-node-1)
2025-05-29 01:01:17.337806 | orchestrator | ok: [testbed-node-0 -> testbed-node-2(192.168.16.12)] => (item=testbed-node-2)
2025-05-29 01:01:17.337816 | orchestrator | ok: [testbed-node-0 -> testbed-node-3(192.168.16.13)] => (item=testbed-node-3)
2025-05-29 01:01:17.337828 | orchestrator | ok: [testbed-node-0 -> testbed-node-4(192.168.16.14)] => (item=testbed-node-4)
2025-05-29 01:01:17.337839 | orchestrator | ok: [testbed-node-0 -> testbed-node-5(192.168.16.15)] => (item=testbed-node-5)
2025-05-29 01:01:17.337849 | orchestrator | ok: [testbed-node-0 -> testbed-manager(192.168.16.5)] => (item=testbed-manager)
2025-05-29 01:01:17.337858 | orchestrator |
2025-05-29 01:01:17.337868 | orchestrator | TASK [ceph-fetch-keys : lookup keys in /etc/ceph] ******************************
2025-05-29 01:01:17.337878 | orchestrator | Thursday 29 May 2025 01:01:07 +0000 (0:00:01.556) 0:00:20.584 **********
2025-05-29 01:01:17.337887 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:01:17.337902 | orchestrator |
2025-05-29 01:01:17.337912 | orchestrator | TASK [ceph-fetch-keys : create a local fetch directory if it does not exist] ***
2025-05-29 01:01:17.337922 | orchestrator | Thursday 29 May 2025 01:01:07 +0000 (0:00:00.511) 0:00:21.096 **********
2025-05-29 01:01:17.337931 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-29 01:01:17.337941 | orchestrator |
2025-05-29 01:01:17.337951 | orchestrator | TASK [ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/] ***
2025-05-29 01:01:17.337961 | orchestrator | Thursday 29 May 2025 01:01:08 +0000 (0:00:00.627) 0:00:21.723 **********
2025-05-29 01:01:17.337976 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.admin.keyring)
2025-05-29 01:01:17.337986 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder-backup.keyring)
2025-05-29 01:01:17.337995 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.cinder.keyring)
2025-05-29 01:01:17.338005 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.crash.keyring)
2025-05-29 01:01:17.338014 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.glance.keyring)
2025-05-29 01:01:17.338079 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.gnocchi.keyring)
2025-05-29 01:01:17.338105 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.manila.keyring)
2025-05-29 01:01:17.338115 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.client.nova.keyring)
2025-05-29 01:01:17.338125 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-0.keyring)
2025-05-29 01:01:17.338134 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-1.keyring)
2025-05-29 01:01:17.338144 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mgr.testbed-node-2.keyring)
2025-05-29 01:01:17.338154 | orchestrator | changed: [testbed-node-0] => (item=/etc/ceph/ceph.mon.keyring)
2025-05-29 01:01:17.338163 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-osd/ceph.keyring)
2025-05-29 01:01:17.338178 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rgw/ceph.keyring)
2025-05-29 01:01:17.338188 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mds/ceph.keyring)
2025-05-29 01:01:17.338197 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-rbd/ceph.keyring)
2025-05-29 01:01:17.338207 | orchestrator | changed: [testbed-node-0] => (item=/var/lib/ceph/bootstrap-mgr/ceph.keyring)
2025-05-29 01:01:17.338217 | orchestrator |
2025-05-29 01:01:17.338226 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:01:17.338236 | orchestrator | testbed-node-0 : ok=28  changed=3  unreachable=0 failed=0 skipped=27  rescued=0 ignored=0
2025-05-29 01:01:17.338247 | orchestrator |
2025-05-29 01:01:17.338257 | orchestrator |
2025-05-29 01:01:17.338266 | orchestrator |
2025-05-29 01:01:17.338276 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 01:01:17.338286 | orchestrator | Thursday 29 May 2025 01:01:14 +0000 (0:00:05.945) 0:00:27.669 **********
2025-05-29 01:01:17.338295 | orchestrator | ===============================================================================
2025-05-29 01:01:17.338305 | orchestrator | ceph-fetch-keys : copy ceph user and bootstrap keys to the ansible server in /share/11111111-1111-1111-1111-111111111111/ --- 5.95s
2025-05-29 01:01:17.338315 | orchestrator | ceph-facts : find a running mon container ------------------------------- 1.94s
2025-05-29 01:01:17.338324 | orchestrator | ceph-facts : get current fsid if cluster is already running ------------- 1.66s
2025-05-29 01:01:17.338334 | orchestrator | ceph-facts : set_fact ceph_admin_command -------------------------------- 1.56s
2025-05-29 01:01:17.338343 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address ------------- 1.11s
2025-05-29 01:01:17.338353 | orchestrator | ceph-facts : set_fact ceph_run_cmd -------------------------------------- 1.06s
2025-05-29 01:01:17.338363 | orchestrator | ceph-facts : convert grafana-server group name if exist ----------------- 0.86s
2025-05-29 01:01:17.338379 | orchestrator | ceph-facts : check if the ceph mon socket is in-use --------------------- 0.83s
2025-05-29 01:01:17.338389 | orchestrator | ceph-facts : set_fact monitor_name ansible_facts['hostname'] ------------ 0.70s
2025-05-29 01:01:17.338399 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.65s
2025-05-29 01:01:17.338408 | orchestrator | ceph-fetch-keys : create a local fetch directory if it does not exist --- 0.63s
2025-05-29 01:01:17.338418 | orchestrator | ceph-facts : check if it is atomic host --------------------------------- 0.59s
2025-05-29 01:01:17.338427 | orchestrator | ceph-fetch-keys : lookup keys in /etc/ceph ------------------------------ 0.51s
2025-05-29 01:01:17.338437 | orchestrator | ceph-facts : read osd pool default crush rule --------------------------- 0.49s
2025-05-29 01:01:17.338446 | orchestrator | ceph-facts : check if the ceph conf exists ------------------------------ 0.47s
2025-05-29 01:01:17.338456 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv6 --- 0.46s
2025-05-29 01:01:17.338465 | orchestrator | ceph-facts : set_fact _monitor_addresses to monitor_address_block ipv4 --- 0.45s
2025-05-29 01:01:17.338475 | orchestrator | ceph-facts : check if podman binary is present -------------------------- 0.43s
2025-05-29 01:01:17.338484 | orchestrator | ceph-facts : check for a ceph mon socket -------------------------------- 0.42s
2025-05-29 01:01:17.338494 | orchestrator | ceph-facts : set_fact ceph_release ceph_stable_release ------------------ 0.32s
2025-05-29 01:01:17.338504 | orchestrator | 2025-05-29 01:01:17 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:17.338514 | orchestrator | 2025-05-29 01:01:17 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:17.339038 | orchestrator | 2025-05-29 01:01:17 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:17.340015 | orchestrator | 2025-05-29 01:01:17 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state STARTED
2025-05-29 01:01:17.340052 | orchestrator | 2025-05-29 01:01:17 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:20.389240 | orchestrator | 2025-05-29 01:01:20 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:01:20.390693 | orchestrator | 2025-05-29 01:01:20 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:20.391728 | orchestrator | 2025-05-29 01:01:20 | INFO  | Task dc61d1c4-d63c-4661-bb18-14b1db4b37b8 is in state STARTED
2025-05-29 01:01:20.392985 | orchestrator | 2025-05-29 01:01:20 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:20.394066 | orchestrator | 2025-05-29 01:01:20 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:20.395064 | orchestrator | 2025-05-29 01:01:20 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:20.397225 | orchestrator | 2025-05-29 01:01:20 | INFO  | Task 33e237ae-9122-4c0f-bab8-04edc8d678d8 is in state SUCCESS
2025-05-29 01:01:20.397821 | orchestrator | 2025-05-29 01:01:20 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:23.448561 | orchestrator | 2025-05-29 01:01:23 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:01:23.450751 | orchestrator | 2025-05-29 01:01:23 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:23.452430 | orchestrator | 2025-05-29 01:01:23 | INFO  | Task dc61d1c4-d63c-4661-bb18-14b1db4b37b8 is in state STARTED
2025-05-29 01:01:23.454289 | orchestrator | 2025-05-29 01:01:23 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:23.455979 | orchestrator | 2025-05-29 01:01:23 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:23.457734 | orchestrator | 2025-05-29 01:01:23 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:23.457759 | orchestrator | 2025-05-29 01:01:23 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:26.518354 | orchestrator | 2025-05-29 01:01:26 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:01:26.519333 | orchestrator | 2025-05-29 01:01:26 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:26.520116 | orchestrator | 2025-05-29 01:01:26 | INFO  | Task dc61d1c4-d63c-4661-bb18-14b1db4b37b8 is in state STARTED
2025-05-29 01:01:26.521044 | orchestrator | 2025-05-29 01:01:26 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:26.522413 | orchestrator | 2025-05-29 01:01:26 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:26.523283 | orchestrator | 2025-05-29 01:01:26 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:26.524190 | orchestrator | 2025-05-29 01:01:26 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:29.597607 | orchestrator | 2025-05-29 01:01:29 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:01:29.602299 | orchestrator | 2025-05-29 01:01:29 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:29.603895 | orchestrator | 2025-05-29 01:01:29 | INFO  | Task dc61d1c4-d63c-4661-bb18-14b1db4b37b8 is in state STARTED
2025-05-29 01:01:29.606890 | orchestrator | 2025-05-29 01:01:29 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:29.606923 | orchestrator | 2025-05-29 01:01:29 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:29.608666 | orchestrator | 2025-05-29 01:01:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:29.608689 | orchestrator | 2025-05-29 01:01:29 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:32.663634 | orchestrator | 2025-05-29 01:01:32 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:01:32.665317 | orchestrator | 2025-05-29 01:01:32 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:32.666694 | orchestrator | 2025-05-29 01:01:32 | INFO  | Task dc61d1c4-d63c-4661-bb18-14b1db4b37b8 is in state STARTED
2025-05-29 01:01:32.669140 | orchestrator | 2025-05-29 01:01:32 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:32.671065 | orchestrator | 2025-05-29 01:01:32 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:32.673278 | orchestrator | 2025-05-29 01:01:32 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:32.673341 | orchestrator | 2025-05-29 01:01:32 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:35.734198 | orchestrator | 2025-05-29 01:01:35 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:01:35.734301 | orchestrator | 2025-05-29 01:01:35 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:35.735830 | orchestrator | 2025-05-29 01:01:35 | INFO  | Task dc61d1c4-d63c-4661-bb18-14b1db4b37b8 is in state STARTED
2025-05-29 01:01:35.737605 | orchestrator | 2025-05-29 01:01:35 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:35.738837 | orchestrator | 2025-05-29 01:01:35 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:35.744867 | orchestrator | 2025-05-29 01:01:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:35.744911 | orchestrator | 2025-05-29 01:01:35 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:38.789929 | orchestrator | 2025-05-29 01:01:38 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:01:38.792822 | orchestrator | 2025-05-29 01:01:38 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:38.795751 | orchestrator | 2025-05-29 01:01:38 | INFO  | Task dc61d1c4-d63c-4661-bb18-14b1db4b37b8 is in state STARTED
2025-05-29 01:01:38.797950 | orchestrator | 2025-05-29 01:01:38 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:38.799995 | orchestrator | 2025-05-29 01:01:38 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:38.802115 | orchestrator | 2025-05-29 01:01:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:38.802146 | orchestrator | 2025-05-29 01:01:38 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:41.874947 | orchestrator | 2025-05-29 01:01:41 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:01:41.877756 | orchestrator | 2025-05-29 01:01:41 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:41.879500 | orchestrator | 2025-05-29 01:01:41 | INFO  | Task dc61d1c4-d63c-4661-bb18-14b1db4b37b8 is in state STARTED
2025-05-29 01:01:41.881675 | orchestrator | 2025-05-29 01:01:41 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:41.883302 | orchestrator | 2025-05-29 01:01:41 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:41.885358 | orchestrator | 2025-05-29 01:01:41 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:41.886242 | orchestrator | 2025-05-29 01:01:41 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:44.934899 | orchestrator | 2025-05-29 01:01:44 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:01:44.936242 | orchestrator | 2025-05-29 01:01:44 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:44.937647 | orchestrator | 2025-05-29 01:01:44 | INFO  | Task dc61d1c4-d63c-4661-bb18-14b1db4b37b8 is in state STARTED
2025-05-29 01:01:44.940438 | orchestrator | 2025-05-29 01:01:44 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:44.943651 | orchestrator | 2025-05-29 01:01:44 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:44.945034 | orchestrator | 2025-05-29 01:01:44 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:44.945075 | orchestrator | 2025-05-29 01:01:44 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:47.993764 | orchestrator | 2025-05-29 01:01:47 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:01:47.995303 | orchestrator | 2025-05-29 01:01:47 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:47.997467 | orchestrator | 2025-05-29 01:01:47 | INFO  | Task dc61d1c4-d63c-4661-bb18-14b1db4b37b8 is in state STARTED
2025-05-29 01:01:47.999473 | orchestrator | 2025-05-29 01:01:47 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:48.002389 | orchestrator | 2025-05-29 01:01:48 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:48.003809 | orchestrator | 2025-05-29 01:01:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:48.003859 | orchestrator | 2025-05-29 01:01:48 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:51.041417 | orchestrator | 2025-05-29 01:01:51 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:01:51.042738 | orchestrator | 2025-05-29 01:01:51 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:51.043540 | orchestrator | 2025-05-29 01:01:51 | INFO  | Task dc61d1c4-d63c-4661-bb18-14b1db4b37b8 is in state SUCCESS
2025-05-29 01:01:51.043867 | orchestrator |
2025-05-29 01:01:51.043893 | orchestrator |
2025-05-29 01:01:51.043905 | orchestrator | PLAY [Copy ceph keys to the configuration repository] **************************
2025-05-29 01:01:51.043917 | orchestrator |
2025-05-29 01:01:51.043928 | orchestrator | TASK [Check ceph keys] *********************************************************
2025-05-29 01:01:51.043940 | orchestrator | Thursday 29 May 2025 01:00:38 +0000 (0:00:00.141) 0:00:00.141 **********
2025-05-29 01:01:51.043951 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-29 01:01:51.043962 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-29 01:01:51.043973 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-29 01:01:51.043984 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-29 01:01:51.044012 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-29 01:01:51.044024 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-29 01:01:51.044035 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-29 01:01:51.044046 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-29 01:01:51.044086 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-29 01:01:51.044097 | orchestrator |
2025-05-29 01:01:51.044108 | orchestrator | TASK [Set _fetch_ceph_keys fact] ***********************************************
2025-05-29 01:01:51.044119 | orchestrator | Thursday 29 May 2025 01:00:41 +0000 (0:00:02.955) 0:00:03.097 **********
2025-05-29 01:01:51.044130 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.admin.keyring)
2025-05-29 01:01:51.044141 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-29 01:01:51.044152 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-29 01:01:51.044163 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder-backup.keyring)
2025-05-29 01:01:51.044174 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.cinder.keyring)
2025-05-29 01:01:51.044185 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.nova.keyring)
2025-05-29 01:01:51.044196 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.glance.keyring)
2025-05-29 01:01:51.044207 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.gnocchi.keyring)
2025-05-29 01:01:51.044218 | orchestrator | ok: [testbed-manager -> localhost] => (item=ceph.client.manila.keyring)
2025-05-29 01:01:51.044229 | orchestrator |
2025-05-29 01:01:51.044240 | orchestrator | TASK [Point out that the following task takes some time and does not give any output] ***
2025-05-29 01:01:51.044251 | orchestrator | Thursday 29 May 2025 01:00:41 +0000 (0:00:00.229) 0:00:03.326 **********
2025-05-29 01:01:51.044262 | orchestrator | ok: [testbed-manager] => {
2025-05-29 01:01:51.044276 | orchestrator |     "msg": "The task 'Fetch ceph keys from the first monitor node' runs an Ansible playbook on the manager. There is no further output of this here. It takes a few minutes for this task to complete."
2025-05-29 01:01:51.044290 | orchestrator | }
2025-05-29 01:01:51.044302 | orchestrator |
2025-05-29 01:01:51.044313 | orchestrator | TASK [Fetch ceph keys from the first monitor node] *****************************
2025-05-29 01:01:51.044348 | orchestrator | Thursday 29 May 2025 01:00:41 +0000 (0:00:00.148) 0:00:03.475 **********
2025-05-29 01:01:51.044360 | orchestrator | changed: [testbed-manager]
2025-05-29 01:01:51.044371 | orchestrator |
2025-05-29 01:01:51.044382 | orchestrator | TASK [Copy ceph infrastructure keys to the configuration repository] ***********
2025-05-29 01:01:51.044393 | orchestrator | Thursday 29 May 2025 01:01:15 +0000 (0:00:33.370) 0:00:36.846 **********
2025-05-29 01:01:51.044406 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.admin.keyring', 'dest': '/opt/configuration/environments/infrastructure/files/ceph/ceph.client.admin.keyring'})
2025-05-29 01:01:51.044417 | orchestrator |
2025-05-29 01:01:51.044428 | orchestrator | TASK [Copy ceph kolla keys to the configuration repository] ********************
2025-05-29 01:01:51.044439 | orchestrator | Thursday 29 May 2025 01:01:15 +0000 (0:00:00.504) 0:00:37.350 **********
2025-05-29 01:01:51.044451 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-volume/ceph.client.cinder.keyring'})
2025-05-29 01:01:51.044463 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder.keyring'})
2025-05-29 01:01:51.044474 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder-backup.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/cinder/cinder-backup/ceph.client.cinder-backup.keyring'})
2025-05-29 01:01:51.044486 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.cinder.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.cinder.keyring'})
2025-05-29 01:01:51.044501 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.nova.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/nova/ceph.client.nova.keyring'})
2025-05-29 01:01:51.044526 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.glance.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/glance/ceph.client.glance.keyring'})
2025-05-29 01:01:51.044542 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.gnocchi.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/gnocchi/ceph.client.gnocchi.keyring'})
2025-05-29 01:01:51.044555 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.client.manila.keyring', 'dest': '/opt/configuration/environments/kolla/files/overlays/manila/ceph.client.manila.keyring'})
2025-05-29 01:01:51.044569 | orchestrator |
2025-05-29 01:01:51.044582 | orchestrator | TASK [Copy ceph custom keys to the configuration repository] *******************
2025-05-29 01:01:51.044601 | orchestrator | Thursday 29 May 2025 01:01:18 +0000 (0:00:02.377) 0:00:39.728 **********
2025-05-29 01:01:51.044614 | orchestrator | skipping: [testbed-manager]
2025-05-29 01:01:51.044628 | orchestrator |
2025-05-29 01:01:51.044641 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:01:51.044655 | orchestrator | testbed-manager : ok=6  changed=3  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 01:01:51.044667 | orchestrator |
2025-05-29 01:01:51.044678 | orchestrator | Thursday 29 May 2025 01:01:18 +0000 (0:00:00.030) 0:00:39.759 **********
2025-05-29 01:01:51.044689 | orchestrator | ===============================================================================
2025-05-29 01:01:51.044700 | orchestrator | Fetch ceph keys from the first monitor node ---------------------------- 33.37s
2025-05-29 01:01:51.044711 | orchestrator | Check ceph keys --------------------------------------------------------- 2.96s
2025-05-29 01:01:51.044721 | orchestrator | Copy ceph kolla keys to the configuration repository -------------------- 2.38s
2025-05-29 01:01:51.044733 | orchestrator | Copy ceph infrastructure keys to the configuration repository ----------- 0.50s
2025-05-29 01:01:51.044744 | orchestrator | Set _fetch_ceph_keys fact ----------------------------------------------- 0.23s
2025-05-29 01:01:51.044762 | orchestrator | Point out that the following task takes some time and does not give any output --- 0.15s
2025-05-29 01:01:51.044773 | orchestrator | Copy ceph custom keys to the configuration repository ------------------- 0.03s
2025-05-29 01:01:51.044785 | orchestrator |
2025-05-29 01:01:51.045075 | orchestrator | 2025-05-29 01:01:51 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED
2025-05-29 01:01:51.046244 | orchestrator | 2025-05-29 01:01:51 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:51.047779 | orchestrator | 2025-05-29 01:01:51 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:51.051007 | orchestrator | 2025-05-29 01:01:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:51.051032 | orchestrator | 2025-05-29 01:01:51 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:54.109752 | orchestrator | 2025-05-29 01:01:54 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:01:54.110488 | orchestrator | 2025-05-29 01:01:54 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:54.111632 | orchestrator | 2025-05-29 01:01:54 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED
2025-05-29 01:01:54.112418 | orchestrator | 2025-05-29 01:01:54 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:54.113306 | orchestrator | 2025-05-29 01:01:54 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:54.114261 | orchestrator | 2025-05-29 01:01:54 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:54.114291 | orchestrator | 2025-05-29 01:01:54 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:01:57.150546 | orchestrator | 2025-05-29 01:01:57 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:01:57.151332 | orchestrator | 2025-05-29 01:01:57 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:01:57.152727 | orchestrator | 2025-05-29 01:01:57 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED
2025-05-29 01:01:57.154502 | orchestrator | 2025-05-29 01:01:57 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:01:57.155856 | orchestrator | 2025-05-29 01:01:57 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:01:57.157586 | orchestrator | 2025-05-29 01:01:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:01:57.157610 | orchestrator | 2025-05-29 01:01:57 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:02:00.186317 | orchestrator | 2025-05-29 01:02:00 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:02:00.186400 | orchestrator | 2025-05-29 01:02:00 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:02:00.186413 | orchestrator | 2025-05-29 01:02:00 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED
2025-05-29 01:02:00.186424 | orchestrator | 2025-05-29 01:02:00 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:02:00.187129 | orchestrator | 2025-05-29 01:02:00 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:02:00.188055 | orchestrator | 2025-05-29 01:02:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:02:00.188236 | orchestrator | 2025-05-29 01:02:00 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:02:03.234902 | orchestrator | 2025-05-29 01:02:03 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:02:03.235021 | orchestrator | 2025-05-29 01:02:03 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:02:03.235735 | orchestrator | 2025-05-29 01:02:03 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED
2025-05-29 01:02:03.237034 | orchestrator | 2025-05-29 01:02:03 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:02:03.237601 | orchestrator | 2025-05-29 01:02:03 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:02:03.239458 | orchestrator | 2025-05-29 01:02:03 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:02:03.239483 | orchestrator | 2025-05-29 01:02:03 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:02:06.266765 | orchestrator | 2025-05-29 01:02:06 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:02:06.267208 | orchestrator | 2025-05-29 01:02:06 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:02:06.267891 | orchestrator | 2025-05-29 01:02:06 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED
2025-05-29 01:02:06.268752 | orchestrator | 2025-05-29 01:02:06 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:02:06.270206 | orchestrator | 2025-05-29 01:02:06 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:02:06.270286 | orchestrator | 2025-05-29 01:02:06 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:02:06.270301 | orchestrator | 2025-05-29 01:02:06 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:02:09.306466 | orchestrator | 2025-05-29 01:02:09 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:02:09.306666 | orchestrator | 2025-05-29 01:02:09 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:02:09.307491 | orchestrator | 2025-05-29 01:02:09 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED
2025-05-29 01:02:09.309161 | orchestrator | 2025-05-29 01:02:09 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:02:09.309680 | orchestrator | 2025-05-29 01:02:09 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:02:09.314242 | orchestrator | 2025-05-29 01:02:09 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:02:09.314267 | orchestrator | 2025-05-29 01:02:09 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:02:12.341124 | orchestrator | 2025-05-29 01:02:12 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state STARTED
2025-05-29 01:02:12.342491 | orchestrator | 2025-05-29 01:02:12 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:02:12.342521 | orchestrator | 2025-05-29 01:02:12 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED
2025-05-29 01:02:12.342533 | orchestrator | 2025-05-29 01:02:12 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:02:12.343133 | orchestrator | 2025-05-29 01:02:12 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:02:12.343566 | orchestrator | 2025-05-29 01:02:12 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:02:12.343585 | orchestrator | 2025-05-29 01:02:12 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:02:15.376225 | orchestrator | 2025-05-29 01:02:15 | INFO  | Task fad8fe39-91d6-4524-ba57-a61647581727 is in state SUCCESS
2025-05-29 01:02:15.376364 | orchestrator | 2025-05-29 01:02:15 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:02:15.376805 | orchestrator | 2025-05-29 01:02:15 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED
2025-05-29 01:02:15.377211 | orchestrator | 2025-05-29 01:02:15 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:02:15.377868 | orchestrator | 2025-05-29 01:02:15 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:02:15.378377 | orchestrator | 2025-05-29 01:02:15 | INFO  | Task 38b12fb8-6c06-49bb-ba8d-aba5efdbc60c is in state STARTED
2025-05-29 01:02:15.379176 | orchestrator | 2025-05-29 01:02:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:02:15.379213 | orchestrator | 2025-05-29 01:02:15 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:02:18.404716 | orchestrator | 2025-05-29 01:02:18 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:02:18.404820 | orchestrator | 2025-05-29 01:02:18 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED
2025-05-29 01:02:18.414153 | orchestrator | 2025-05-29 01:02:18 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:02:18.414246 | orchestrator | 2025-05-29 01:02:18 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:02:18.414261 | orchestrator | 2025-05-29 01:02:18 | INFO  | Task 38b12fb8-6c06-49bb-ba8d-aba5efdbc60c is in state STARTED
2025-05-29 01:02:18.414273 | orchestrator | 2025-05-29 01:02:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:02:18.414285 | orchestrator | 2025-05-29 01:02:18 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:02:21.447079 | orchestrator | 2025-05-29 01:02:21 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:02:21.447191 | orchestrator | 2025-05-29 01:02:21 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED
2025-05-29 01:02:21.455501 | orchestrator | 2025-05-29 01:02:21 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:02:21.455552 | orchestrator | 2025-05-29 01:02:21 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:02:21.455564 | orchestrator | 2025-05-29 01:02:21 | INFO  | Task 38b12fb8-6c06-49bb-ba8d-aba5efdbc60c is in state STARTED
2025-05-29 01:02:21.455575 | orchestrator | 2025-05-29 01:02:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:02:21.455587 | orchestrator | 2025-05-29 01:02:21 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:02:24.486583 | orchestrator | 2025-05-29 01:02:24 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:02:24.487229 | orchestrator | 2025-05-29 01:02:24 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED
2025-05-29 01:02:24.487615 | orchestrator | 2025-05-29 01:02:24 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:02:24.488402 | orchestrator | 2025-05-29 01:02:24 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED
2025-05-29 01:02:24.489608 | orchestrator | 2025-05-29 01:02:24 | INFO  | Task 38b12fb8-6c06-49bb-ba8d-aba5efdbc60c is in state STARTED
2025-05-29 01:02:24.489878 | orchestrator | 2025-05-29 01:02:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:02:24.490115 | orchestrator | 2025-05-29 01:02:24 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:02:27.516990 | orchestrator | 2025-05-29 01:02:27 | INFO  |
Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:02:27.517906 | orchestrator | 2025-05-29 01:02:27 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:02:27.518323 | orchestrator | 2025-05-29 01:02:27 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:02:27.518858 | orchestrator | 2025-05-29 01:02:27 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:02:27.519446 | orchestrator | 2025-05-29 01:02:27 | INFO  | Task 38b12fb8-6c06-49bb-ba8d-aba5efdbc60c is in state STARTED 2025-05-29 01:02:27.520079 | orchestrator | 2025-05-29 01:02:27 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:02:27.520101 | orchestrator | 2025-05-29 01:02:27 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:02:30.547644 | orchestrator | 2025-05-29 01:02:30 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:02:30.547748 | orchestrator | 2025-05-29 01:02:30 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:02:30.548177 | orchestrator | 2025-05-29 01:02:30 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:02:30.548811 | orchestrator | 2025-05-29 01:02:30 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:02:30.549195 | orchestrator | 2025-05-29 01:02:30 | INFO  | Task 38b12fb8-6c06-49bb-ba8d-aba5efdbc60c is in state STARTED 2025-05-29 01:02:30.550162 | orchestrator | 2025-05-29 01:02:30 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:02:30.550282 | orchestrator | 2025-05-29 01:02:30 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:02:33.576770 | orchestrator | 2025-05-29 01:02:33 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:02:33.576875 | orchestrator | 2025-05-29 01:02:33 | INFO  | Task 
795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:02:33.576891 | orchestrator | 2025-05-29 01:02:33 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:02:33.576914 | orchestrator | 2025-05-29 01:02:33 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:02:33.577418 | orchestrator | 2025-05-29 01:02:33 | INFO  | Task 38b12fb8-6c06-49bb-ba8d-aba5efdbc60c is in state STARTED 2025-05-29 01:02:33.577738 | orchestrator | 2025-05-29 01:02:33 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:02:33.577759 | orchestrator | 2025-05-29 01:02:33 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:02:36.600495 | orchestrator | 2025-05-29 01:02:36 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:02:36.600580 | orchestrator | 2025-05-29 01:02:36 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:02:36.600655 | orchestrator | 2025-05-29 01:02:36 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:02:36.600666 | orchestrator | 2025-05-29 01:02:36 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:02:36.600683 | orchestrator | 2025-05-29 01:02:36 | INFO  | Task 38b12fb8-6c06-49bb-ba8d-aba5efdbc60c is in state STARTED 2025-05-29 01:02:36.601377 | orchestrator | 2025-05-29 01:02:36 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:02:36.601401 | orchestrator | 2025-05-29 01:02:36 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:02:39.631511 | orchestrator | 2025-05-29 01:02:39 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:02:39.631704 | orchestrator | 2025-05-29 01:02:39 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:02:39.632216 | orchestrator | 2025-05-29 01:02:39 | INFO  | Task 
714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:02:39.632674 | orchestrator | 2025-05-29 01:02:39 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:02:39.633185 | orchestrator | 2025-05-29 01:02:39 | INFO  | Task 38b12fb8-6c06-49bb-ba8d-aba5efdbc60c is in state STARTED 2025-05-29 01:02:39.633670 | orchestrator | 2025-05-29 01:02:39 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:02:39.633691 | orchestrator | 2025-05-29 01:02:39 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:02:42.669643 | orchestrator | 2025-05-29 01:02:42 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:02:42.669731 | orchestrator | 2025-05-29 01:02:42 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:02:42.670333 | orchestrator | 2025-05-29 01:02:42 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:02:42.670775 | orchestrator | 2025-05-29 01:02:42 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:02:42.671475 | orchestrator | 2025-05-29 01:02:42 | INFO  | Task 38b12fb8-6c06-49bb-ba8d-aba5efdbc60c is in state STARTED 2025-05-29 01:02:42.672203 | orchestrator | 2025-05-29 01:02:42 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:02:42.672224 | orchestrator | 2025-05-29 01:02:42 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:02:45.697987 | orchestrator | 2025-05-29 01:02:45 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:02:45.698364 | orchestrator | 2025-05-29 01:02:45 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:02:45.699315 | orchestrator | 2025-05-29 01:02:45 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:02:45.700177 | orchestrator | 2025-05-29 01:02:45 | INFO  | Task 
6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:02:45.701522 | orchestrator | 2025-05-29 01:02:45 | INFO  | Task 38b12fb8-6c06-49bb-ba8d-aba5efdbc60c is in state STARTED 2025-05-29 01:02:45.701975 | orchestrator | 2025-05-29 01:02:45 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:02:45.702179 | orchestrator | 2025-05-29 01:02:45 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:02:48.742968 | orchestrator | 2025-05-29 01:02:48 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:02:48.743916 | orchestrator | 2025-05-29 01:02:48 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:02:48.743946 | orchestrator | 2025-05-29 01:02:48 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:02:48.743958 | orchestrator | 2025-05-29 01:02:48 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:02:48.744617 | orchestrator | 2025-05-29 01:02:48 | INFO  | Task 38b12fb8-6c06-49bb-ba8d-aba5efdbc60c is in state STARTED 2025-05-29 01:02:48.745156 | orchestrator | 2025-05-29 01:02:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:02:48.745178 | orchestrator | 2025-05-29 01:02:48 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:02:51.784986 | orchestrator | 2025-05-29 01:02:51 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:02:51.785195 | orchestrator | 2025-05-29 01:02:51 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:02:51.785495 | orchestrator | 2025-05-29 01:02:51 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:02:51.786497 | orchestrator | 2025-05-29 01:02:51 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:02:51.786519 | orchestrator | 2025-05-29 01:02:51 | INFO  | Task 
38b12fb8-6c06-49bb-ba8d-aba5efdbc60c is in state SUCCESS
2025-05-29 01:02:51.787192 | orchestrator |
2025-05-29 01:02:51.787281 | orchestrator | PLAY [Download ironic ipa images] **********************************************
2025-05-29 01:02:51.787295 | orchestrator |
2025-05-29 01:02:51.787305 | orchestrator | TASK [Ensure the destination directory exists] *********************************
2025-05-29 01:02:51.787315 | orchestrator | Thursday 29 May 2025 01:01:15 +0000 (0:00:00.347) 0:00:00.347 **********
2025-05-29 01:02:51.787324 | orchestrator | changed: [localhost]
2025-05-29 01:02:51.787335 | orchestrator |
2025-05-29 01:02:51.787344 | orchestrator | TASK [Download ironic-agent initramfs] *****************************************
2025-05-29 01:02:51.787354 | orchestrator | Thursday 29 May 2025 01:01:16 +0000 (0:00:00.820) 0:00:01.168 **********
2025-05-29 01:02:51.787363 | orchestrator | changed: [localhost]
2025-05-29 01:02:51.787371 | orchestrator |
2025-05-29 01:02:51.787381 | orchestrator | TASK [Download ironic-agent kernel] ********************************************
2025-05-29 01:02:51.787389 | orchestrator | Thursday 29 May 2025 01:01:44 +0000 (0:00:28.084) 0:00:29.252 **********
2025-05-29 01:02:51.787398 | orchestrator | changed: [localhost]
2025-05-29 01:02:51.787407 | orchestrator |
2025-05-29 01:02:51.787416 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-29 01:02:51.787425 | orchestrator |
2025-05-29 01:02:51.787434 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-29 01:02:51.787443 | orchestrator | Thursday 29 May 2025 01:01:47 +0000 (0:00:03.606) 0:00:32.859 **********
2025-05-29 01:02:51.787452 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:02:51.787461 | orchestrator | ok: [testbed-node-1]
2025-05-29 01:02:51.787470 | orchestrator | ok: [testbed-node-2]
2025-05-29 01:02:51.787479 | orchestrator |
2025-05-29 01:02:51.787488 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-29 01:02:51.787497 | orchestrator | Thursday 29 May 2025 01:01:48 +0000 (0:00:00.381) 0:00:33.241 **********
2025-05-29 01:02:51.787506 | orchestrator | [WARNING]: Could not match supplied host pattern, ignoring: enable_ironic_True
2025-05-29 01:02:51.787515 | orchestrator | ok: [testbed-node-0] => (item=enable_ironic_False)
2025-05-29 01:02:51.787525 | orchestrator | ok: [testbed-node-1] => (item=enable_ironic_False)
2025-05-29 01:02:51.787534 | orchestrator | ok: [testbed-node-2] => (item=enable_ironic_False)
2025-05-29 01:02:51.787543 | orchestrator |
2025-05-29 01:02:51.787552 | orchestrator | PLAY [Apply role ironic] *******************************************************
2025-05-29 01:02:51.787561 | orchestrator | skipping: no hosts matched
2025-05-29 01:02:51.787571 | orchestrator |
2025-05-29 01:02:51.787580 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:02:51.787589 | orchestrator | localhost : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:02:51.787601 | orchestrator | testbed-node-0 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:02:51.787611 | orchestrator | testbed-node-1 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:02:51.787620 | orchestrator | testbed-node-2 : ok=2  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:02:51.787653 | orchestrator |
2025-05-29 01:02:51.787662 | orchestrator |
2025-05-29 01:02:51.787671 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 01:02:51.787680 | orchestrator | Thursday 29 May 2025 01:01:48 +0000 (0:00:00.408) 0:00:33.649 **********
2025-05-29 01:02:51.787689 | orchestrator | ===============================================================================
2025-05-29 01:02:51.787698 | orchestrator | Download ironic-agent initramfs ---------------------------------------- 28.08s
2025-05-29 01:02:51.787707 | orchestrator | Download ironic-agent kernel -------------------------------------------- 3.61s
2025-05-29 01:02:51.787732 | orchestrator | Ensure the destination directory exists --------------------------------- 0.82s
2025-05-29 01:02:51.787741 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.41s
2025-05-29 01:02:51.787752 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.38s
2025-05-29 01:02:51.787762 | orchestrator |
2025-05-29 01:02:51.787773 | orchestrator |
2025-05-29 01:02:51.787783 | orchestrator | PLAY [Apply role cephclient] ***************************************************
2025-05-29 01:02:51.787794 | orchestrator |
2025-05-29 01:02:51.787805 | orchestrator | TASK [osism.services.cephclient : Include container tasks] *********************
2025-05-29 01:02:51.787815 | orchestrator | Thursday 29 May 2025 01:01:21 +0000 (0:00:00.153) 0:00:00.153 **********
2025-05-29 01:02:51.787825 | orchestrator | included: /usr/share/ansible/collections/ansible_collections/osism/services/roles/cephclient/tasks/container.yml for testbed-manager
2025-05-29 01:02:51.787836 | orchestrator |
2025-05-29 01:02:51.787846 | orchestrator | TASK [osism.services.cephclient : Create required directories] *****************
2025-05-29 01:02:51.787856 | orchestrator | Thursday 29 May 2025 01:01:21 +0000 (0:00:00.216) 0:00:00.369 **********
2025-05-29 01:02:51.787867 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/configuration)
2025-05-29 01:02:51.787997 | orchestrator | changed: [testbed-manager] => (item=/opt/cephclient/data)
2025-05-29 01:02:51.788046 | orchestrator | ok: [testbed-manager] => (item=/opt/cephclient)
2025-05-29 01:02:51.788061 | orchestrator |
2025-05-29 01:02:51.788076 | orchestrator | TASK [osism.services.cephclient : Copy configuration files] ********************
2025-05-29 01:02:51.788090 | orchestrator | Thursday 29 May 2025 01:01:22 +0000 (0:00:01.264) 0:00:01.634 **********
2025-05-29 01:02:51.788103 | orchestrator | changed: [testbed-manager] => (item={'src': 'ceph.conf.j2', 'dest': '/opt/cephclient/configuration/ceph.conf'})
2025-05-29 01:02:51.788116 | orchestrator |
2025-05-29 01:02:51.788129 | orchestrator | TASK [osism.services.cephclient : Copy keyring file] ***************************
2025-05-29 01:02:51.788142 | orchestrator | Thursday 29 May 2025 01:01:23 +0000 (0:00:01.266) 0:00:02.900 **********
2025-05-29 01:02:51.788171 | orchestrator | changed: [testbed-manager]
2025-05-29 01:02:51.788186 | orchestrator |
2025-05-29 01:02:51.788200 | orchestrator | TASK [osism.services.cephclient : Copy docker-compose.yml file] ****************
2025-05-29 01:02:51.788213 | orchestrator | Thursday 29 May 2025 01:01:24 +0000 (0:00:00.914) 0:00:03.815 **********
2025-05-29 01:02:51.788229 | orchestrator | changed: [testbed-manager]
2025-05-29 01:02:51.788244 | orchestrator |
2025-05-29 01:02:51.788258 | orchestrator | TASK [osism.services.cephclient : Manage cephclient service] *******************
2025-05-29 01:02:51.788272 | orchestrator | Thursday 29 May 2025 01:01:25 +0000 (0:00:01.152) 0:00:04.968 **********
2025-05-29 01:02:51.788287 | orchestrator | FAILED - RETRYING: [testbed-manager]: Manage cephclient service (10 retries left).
2025-05-29 01:02:51.788301 | orchestrator | ok: [testbed-manager]
2025-05-29 01:02:51.788316 | orchestrator |
2025-05-29 01:02:51.788331 | orchestrator | TASK [osism.services.cephclient : Copy wrapper scripts] ************************
2025-05-29 01:02:51.788346 | orchestrator | Thursday 29 May 2025 01:02:05 +0000 (0:00:39.450) 0:00:44.419 **********
2025-05-29 01:02:51.788360 | orchestrator | changed: [testbed-manager] => (item=ceph)
2025-05-29 01:02:51.788375 | orchestrator | changed: [testbed-manager] => (item=ceph-authtool)
2025-05-29 01:02:51.788389 | orchestrator | changed: [testbed-manager] => (item=rados)
2025-05-29 01:02:51.788403 | orchestrator | changed: [testbed-manager] => (item=radosgw-admin)
2025-05-29 01:02:51.788435 | orchestrator | changed: [testbed-manager] => (item=rbd)
2025-05-29 01:02:51.788450 | orchestrator |
2025-05-29 01:02:51.788464 | orchestrator | TASK [osism.services.cephclient : Remove old wrapper scripts] ******************
2025-05-29 01:02:51.788479 | orchestrator | Thursday 29 May 2025 01:02:08 +0000 (0:00:03.211) 0:00:47.630 **********
2025-05-29 01:02:51.788494 | orchestrator | ok: [testbed-manager] => (item=crushtool)
2025-05-29 01:02:51.788508 | orchestrator |
2025-05-29 01:02:51.788522 | orchestrator | TASK [osism.services.cephclient : Include package tasks] ***********************
2025-05-29 01:02:51.788536 | orchestrator | Thursday 29 May 2025 01:02:08 +0000 (0:00:00.375) 0:00:48.006 **********
2025-05-29 01:02:51.788550 | orchestrator | skipping: [testbed-manager]
2025-05-29 01:02:51.788564 | orchestrator |
2025-05-29 01:02:51.788579 | orchestrator | TASK [osism.services.cephclient : Include rook task] ***************************
2025-05-29 01:02:51.788593 | orchestrator | Thursday 29 May 2025 01:02:08 +0000 (0:00:00.093) 0:00:48.100 **********
2025-05-29 01:02:51.788608 | orchestrator | skipping: [testbed-manager]
2025-05-29 01:02:51.788623 | orchestrator |
2025-05-29 01:02:51.788637 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Restart cephclient service] *******
2025-05-29 01:02:51.788652 | orchestrator | Thursday 29 May 2025 01:02:09 +0000 (0:00:00.206) 0:00:48.306 **********
2025-05-29 01:02:51.788666 | orchestrator | changed: [testbed-manager]
2025-05-29 01:02:51.788680 | orchestrator |
2025-05-29 01:02:51.788694 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Ensure that all containers are up] ***
2025-05-29 01:02:51.788709 | orchestrator | Thursday 29 May 2025 01:02:10 +0000 (0:00:01.066) 0:00:49.372 **********
2025-05-29 01:02:51.788723 | orchestrator | changed: [testbed-manager]
2025-05-29 01:02:51.788738 | orchestrator |
2025-05-29 01:02:51.788752 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Wait for an healthy service] ******
2025-05-29 01:02:51.788766 | orchestrator | Thursday 29 May 2025 01:02:11 +0000 (0:00:00.764) 0:00:50.137 **********
2025-05-29 01:02:51.788781 | orchestrator | changed: [testbed-manager]
2025-05-29 01:02:51.788795 | orchestrator |
2025-05-29 01:02:51.788810 | orchestrator | RUNNING HANDLER [osism.services.cephclient : Copy bash completion scripts] *****
2025-05-29 01:02:51.788824 | orchestrator | Thursday 29 May 2025 01:02:11 +0000 (0:00:00.488) 0:00:50.625 **********
2025-05-29 01:02:51.788839 | orchestrator | ok: [testbed-manager] => (item=ceph)
2025-05-29 01:02:51.788854 | orchestrator | ok: [testbed-manager] => (item=rados)
2025-05-29 01:02:51.788868 | orchestrator | ok: [testbed-manager] => (item=radosgw-admin)
2025-05-29 01:02:51.788883 | orchestrator | ok: [testbed-manager] => (item=rbd)
2025-05-29 01:02:51.788899 | orchestrator |
2025-05-29 01:02:51.788915 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:02:51.788943 | orchestrator | testbed-manager : ok=12  changed=8  unreachable=0 failed=0 skipped=2  rescued=0 ignored=0
2025-05-29 01:02:51.788954 | orchestrator |
2025-05-29 01:02:51.788963 | orchestrator | Thursday 29 May 2025 01:02:12 +0000 (0:00:01.147) 0:00:51.773 **********
2025-05-29 01:02:51.788972 | orchestrator | ===============================================================================
2025-05-29 01:02:51.788980 | orchestrator | osism.services.cephclient : Manage cephclient service ------------------ 39.45s
2025-05-29 01:02:51.788989 | orchestrator | osism.services.cephclient : Copy wrapper scripts ------------------------ 3.21s
2025-05-29 01:02:51.788998 | orchestrator | osism.services.cephclient : Copy configuration files -------------------- 1.27s
2025-05-29 01:02:51.789031 | orchestrator | osism.services.cephclient : Create required directories ----------------- 1.26s
2025-05-29 01:02:51.789041 | orchestrator | osism.services.cephclient : Copy docker-compose.yml file ---------------- 1.15s
2025-05-29 01:02:51.789050 | orchestrator | osism.services.cephclient : Copy bash completion scripts ---------------- 1.15s
2025-05-29 01:02:51.789059 | orchestrator | osism.services.cephclient : Restart cephclient service ------------------ 1.07s
2025-05-29 01:02:51.789067 | orchestrator | osism.services.cephclient : Copy keyring file --------------------------- 0.91s
2025-05-29 01:02:51.789076 | orchestrator | osism.services.cephclient : Ensure that all containers are up ----------- 0.76s
2025-05-29 01:02:51.789095 | orchestrator | osism.services.cephclient : Wait for an healthy service ----------------- 0.49s
2025-05-29 01:02:51.789104 | orchestrator | osism.services.cephclient : Remove old wrapper scripts ------------------ 0.38s
2025-05-29 01:02:51.789113 | orchestrator | osism.services.cephclient : Include container tasks --------------------- 0.22s
2025-05-29 01:02:51.789121 | orchestrator | osism.services.cephclient : Include rook task --------------------------- 0.21s
2025-05-29 01:02:51.789130 | orchestrator | osism.services.cephclient : Include package tasks ----------------------- 0.09s
2025-05-29 01:02:51.789139 | orchestrator |
2025-05-29 01:02:51.789159 |
orchestrator | 2025-05-29 01:02:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:02:51.789169 | orchestrator | 2025-05-29 01:02:51 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:02:54.821886 | orchestrator | 2025-05-29 01:02:54 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:02:54.822234 | orchestrator | 2025-05-29 01:02:54 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:02:54.823081 | orchestrator | 2025-05-29 01:02:54 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:02:54.824701 | orchestrator | 2025-05-29 01:02:54 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:02:54.825335 | orchestrator | 2025-05-29 01:02:54 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:02:54.825473 | orchestrator | 2025-05-29 01:02:54 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:02:57.860444 | orchestrator | 2025-05-29 01:02:57 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:02:57.861087 | orchestrator | 2025-05-29 01:02:57 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:02:57.864146 | orchestrator | 2025-05-29 01:02:57 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:02:57.864741 | orchestrator | 2025-05-29 01:02:57 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:02:57.865209 | orchestrator | 2025-05-29 01:02:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:02:57.865307 | orchestrator | 2025-05-29 01:02:57 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:03:00.908943 | orchestrator | 2025-05-29 01:03:00 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:03:00.909499 | orchestrator | 2025-05-29 
01:03:00 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:03:00.910844 | orchestrator | 2025-05-29 01:03:00 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:03:00.911708 | orchestrator | 2025-05-29 01:03:00 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:03:00.912668 | orchestrator | 2025-05-29 01:03:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:03:00.912714 | orchestrator | 2025-05-29 01:03:00 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:03:03.960952 | orchestrator | 2025-05-29 01:03:03 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:03:03.961840 | orchestrator | 2025-05-29 01:03:03 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:03:03.962829 | orchestrator | 2025-05-29 01:03:03 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:03:03.964896 | orchestrator | 2025-05-29 01:03:03 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:03:03.965436 | orchestrator | 2025-05-29 01:03:03 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:03:03.965466 | orchestrator | 2025-05-29 01:03:03 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:03:07.016970 | orchestrator | 2025-05-29 01:03:07 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:03:07.017249 | orchestrator | 2025-05-29 01:03:07 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:03:07.017983 | orchestrator | 2025-05-29 01:03:07 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:03:07.023088 | orchestrator | 2025-05-29 01:03:07 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:03:07.023178 | orchestrator | 2025-05-29 
01:03:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:03:07.023193 | orchestrator | 2025-05-29 01:03:07 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:03:10.044309 | orchestrator | 2025-05-29 01:03:10 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:03:10.044426 | orchestrator | 2025-05-29 01:03:10 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:03:10.044441 | orchestrator | 2025-05-29 01:03:10 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:03:10.044453 | orchestrator | 2025-05-29 01:03:10 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:03:10.044477 | orchestrator | 2025-05-29 01:03:10 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:03:10.044489 | orchestrator | 2025-05-29 01:03:10 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:03:13.080287 | orchestrator | 2025-05-29 01:03:13 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:03:13.080480 | orchestrator | 2025-05-29 01:03:13 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:03:13.082494 | orchestrator | 2025-05-29 01:03:13 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:03:13.082524 | orchestrator | 2025-05-29 01:03:13 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:03:13.082536 | orchestrator | 2025-05-29 01:03:13 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:03:13.082549 | orchestrator | 2025-05-29 01:03:13 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:03:16.123746 | orchestrator | 2025-05-29 01:03:16 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:03:16.124968 | orchestrator | 2025-05-29 01:03:16 | INFO  | Task 
795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:03:16.126664 | orchestrator | 2025-05-29 01:03:16 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:03:16.127752 | orchestrator | 2025-05-29 01:03:16 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:03:16.128913 | orchestrator | 2025-05-29 01:03:16 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:03:16.128938 | orchestrator | 2025-05-29 01:03:16 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:03:19.178187 | orchestrator | 2025-05-29 01:03:19 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:03:19.178321 | orchestrator | 2025-05-29 01:03:19 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state STARTED 2025-05-29 01:03:19.178585 | orchestrator | 2025-05-29 01:03:19 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:03:19.179411 | orchestrator | 2025-05-29 01:03:19 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:03:19.181388 | orchestrator | 2025-05-29 01:03:19 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:03:19.181419 | orchestrator | 2025-05-29 01:03:19 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:03:22.224786 | orchestrator | 2025-05-29 01:03:22 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:03:22.224877 | orchestrator | 2025-05-29 01:03:22 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED 2025-05-29 01:03:22.225688 | orchestrator | 2025-05-29 01:03:22 | INFO  | Task 795edf97-8642-4e57-b7b3-f6e28249cf74 is in state SUCCESS 2025-05-29 01:03:22.228057 | orchestrator | [WARNING]: Collection osism.commons does not support Ansible version 2.15.12 2025-05-29 01:03:22.228098 | orchestrator | 2025-05-29 01:03:22.228107 | orchestrator | PLAY 
[Bootstraph ceph dashboard] ***********************************************
2025-05-29 01:03:22.228115 | orchestrator |
2025-05-29 01:03:22.228122 | orchestrator | TASK [Disable the ceph dashboard] **********************************************
2025-05-29 01:03:22.228129 | orchestrator | Thursday 29 May 2025 01:02:16 +0000 (0:00:00.392) 0:00:00.392 **********
2025-05-29 01:03:22.228137 | orchestrator | changed: [testbed-manager]
2025-05-29 01:03:22.228145 | orchestrator |
2025-05-29 01:03:22.228152 | orchestrator | TASK [Set mgr/dashboard/ssl to false] ******************************************
2025-05-29 01:03:22.228160 | orchestrator | Thursday 29 May 2025 01:02:18 +0000 (0:00:01.847) 0:00:02.239 **********
2025-05-29 01:03:22.228167 | orchestrator | changed: [testbed-manager]
2025-05-29 01:03:22.228174 | orchestrator |
2025-05-29 01:03:22.228181 | orchestrator | TASK [Set mgr/dashboard/server_port to 7000] ***********************************
2025-05-29 01:03:22.228189 | orchestrator | Thursday 29 May 2025 01:02:18 +0000 (0:00:00.850) 0:00:03.090 **********
2025-05-29 01:03:22.228195 | orchestrator | changed: [testbed-manager]
2025-05-29 01:03:22.228202 | orchestrator |
2025-05-29 01:03:22.228210 | orchestrator | TASK [Set mgr/dashboard/server_addr to 0.0.0.0] ********************************
2025-05-29 01:03:22.228217 | orchestrator | Thursday 29 May 2025 01:02:19 +0000 (0:00:00.821) 0:00:03.911 **********
2025-05-29 01:03:22.228224 | orchestrator | changed: [testbed-manager]
2025-05-29 01:03:22.228232 | orchestrator |
2025-05-29 01:03:22.228239 | orchestrator | TASK [Set mgr/dashboard/standby_behaviour to error] ****************************
2025-05-29 01:03:22.228247 | orchestrator | Thursday 29 May 2025 01:02:20 +0000 (0:00:00.817) 0:00:04.729 **********
2025-05-29 01:03:22.228255 | orchestrator | changed: [testbed-manager]
2025-05-29 01:03:22.228262 | orchestrator |
2025-05-29 01:03:22.228268 | orchestrator | TASK [Set mgr/dashboard/standby_error_status_code to 404] **********************
2025-05-29 01:03:22.228275 | orchestrator | Thursday 29 May 2025 01:02:21 +0000 (0:00:00.805) 0:00:05.535 **********
2025-05-29 01:03:22.228282 | orchestrator | changed: [testbed-manager]
2025-05-29 01:03:22.228290 | orchestrator |
2025-05-29 01:03:22.228296 | orchestrator | TASK [Enable the ceph dashboard] ***********************************************
2025-05-29 01:03:22.228304 | orchestrator | Thursday 29 May 2025 01:02:22 +0000 (0:00:01.012) 0:00:06.548 **********
2025-05-29 01:03:22.228311 | orchestrator | changed: [testbed-manager]
2025-05-29 01:03:22.228317 | orchestrator |
2025-05-29 01:03:22.228323 | orchestrator | TASK [Write ceph_dashboard_password to temporary file] *************************
2025-05-29 01:03:22.228329 | orchestrator | Thursday 29 May 2025 01:02:23 +0000 (0:00:01.135) 0:00:07.642 **********
2025-05-29 01:03:22.228335 | orchestrator | changed: [testbed-manager]
2025-05-29 01:03:22.228342 | orchestrator |
2025-05-29 01:03:22.228349 | orchestrator | TASK [Create admin user] *******************************************************
2025-05-29 01:03:22.228356 | orchestrator | Thursday 29 May 2025 01:02:24 +0000 (0:00:01.135) 0:00:08.778 **********
2025-05-29 01:03:22.228363 | orchestrator | changed: [testbed-manager]
2025-05-29 01:03:22.228388 | orchestrator |
2025-05-29 01:03:22.228395 | orchestrator | TASK [Remove temporary file for ceph_dashboard_password] ***********************
2025-05-29 01:03:22.228402 | orchestrator | Thursday 29 May 2025 01:02:42 +0000 (0:00:18.088) 0:00:26.866 **********
2025-05-29 01:03:22.228409 | orchestrator | skipping: [testbed-manager]
2025-05-29 01:03:22.228416 | orchestrator |
2025-05-29 01:03:22.228423 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-29 01:03:22.228430 | orchestrator |
2025-05-29 01:03:22.228437 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-29 01:03:22.228444 | orchestrator | Thursday 29 May 2025 01:02:43 +0000 (0:00:00.606) 0:00:27.473 **********
2025-05-29 01:03:22.228451 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:03:22.228458 | orchestrator |
2025-05-29 01:03:22.228465 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-29 01:03:22.228472 | orchestrator |
2025-05-29 01:03:22.228479 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-29 01:03:22.228485 | orchestrator | Thursday 29 May 2025 01:02:45 +0000 (0:00:02.108) 0:00:29.582 **********
2025-05-29 01:03:22.228493 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:03:22.228499 | orchestrator |
2025-05-29 01:03:22.228507 | orchestrator | PLAY [Restart ceph manager services] *******************************************
2025-05-29 01:03:22.228514 | orchestrator |
2025-05-29 01:03:22.228520 | orchestrator | TASK [Restart ceph manager service] ********************************************
2025-05-29 01:03:22.228528 | orchestrator | Thursday 29 May 2025 01:02:47 +0000 (0:00:01.851) 0:00:31.433 **********
2025-05-29 01:03:22.228535 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:03:22.228543 | orchestrator |
2025-05-29 01:03:22.228550 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:03:22.228558 | orchestrator | testbed-manager : ok=9  changed=9  unreachable=0 failed=0 skipped=1  rescued=0 ignored=0
2025-05-29 01:03:22.228567 | orchestrator | testbed-node-0 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:03:22.228574 | orchestrator | testbed-node-1 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:03:22.228581 | orchestrator | testbed-node-2 : ok=1  changed=1  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:03:22.228588 | orchestrator |
2025-05-29 01:03:22.228596 | orchestrator |
2025-05-29 01:03:22.228603 | orchestrator |
2025-05-29 01:03:22.228610 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 01:03:22.228619 | orchestrator | Thursday 29 May 2025 01:02:48 +0000 (0:00:01.333) 0:00:32.767 **********
2025-05-29 01:03:22.228627 | orchestrator | ===============================================================================
2025-05-29 01:03:22.228644 | orchestrator | Create admin user ------------------------------------------------------ 18.09s
2025-05-29 01:03:22.228664 | orchestrator | Restart ceph manager service -------------------------------------------- 5.29s
2025-05-29 01:03:22.228674 | orchestrator | Disable the ceph dashboard ---------------------------------------------- 1.85s
2025-05-29 01:03:22.228682 | orchestrator | Write ceph_dashboard_password to temporary file ------------------------- 1.14s
2025-05-29 01:03:22.228690 | orchestrator | Enable the ceph dashboard ----------------------------------------------- 1.09s
2025-05-29 01:03:22.228698 | orchestrator | Set mgr/dashboard/standby_error_status_code to 404 ---------------------- 1.01s
2025-05-29 01:03:22.228706 | orchestrator | Set mgr/dashboard/ssl to false ------------------------------------------ 0.85s
2025-05-29 01:03:22.228715 | orchestrator | Set mgr/dashboard/server_port to 7000 ----------------------------------- 0.82s
2025-05-29 01:03:22.228724 | orchestrator | Set mgr/dashboard/server_addr to 0.0.0.0 -------------------------------- 0.82s
2025-05-29 01:03:22.228733 | orchestrator | Set mgr/dashboard/standby_behaviour to error ---------------------------- 0.81s
2025-05-29 01:03:22.228741 | orchestrator | Remove temporary file for ceph_dashboard_password ----------------------- 0.61s
2025-05-29 01:03:22.228756 | orchestrator |
2025-05-29 01:03:22.228763 | orchestrator |
2025-05-29 01:03:22.228770 | orchestrator | PLAY [Group hosts based on configuration]
**************************************
2025-05-29 01:03:22.228777 | orchestrator |
2025-05-29 01:03:22.228783 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-29 01:03:22.228790 | orchestrator | Thursday 29 May 2025 01:01:52 +0000 (0:00:00.390) 0:00:00.390 **********
2025-05-29 01:03:22.228798 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:03:22.228806 | orchestrator | ok: [testbed-node-1]
2025-05-29 01:03:22.228814 | orchestrator | ok: [testbed-node-2]
2025-05-29 01:03:22.228821 | orchestrator |
2025-05-29 01:03:22.228828 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-29 01:03:22.228835 | orchestrator | Thursday 29 May 2025 01:01:52 +0000 (0:00:00.421) 0:00:00.812 **********
2025-05-29 01:03:22.228843 | orchestrator | ok: [testbed-node-0] => (item=enable_placement_True)
2025-05-29 01:03:22.228850 | orchestrator | ok: [testbed-node-1] => (item=enable_placement_True)
2025-05-29 01:03:22.228856 | orchestrator | ok: [testbed-node-2] => (item=enable_placement_True)
2025-05-29 01:03:22.228863 | orchestrator |
2025-05-29 01:03:22.228870 | orchestrator | PLAY [Apply role placement] ****************************************************
2025-05-29 01:03:22.228878 | orchestrator |
2025-05-29 01:03:22.228885 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-05-29 01:03:22.228892 | orchestrator | Thursday 29 May 2025 01:01:52 +0000 (0:00:00.365) 0:00:01.177 **********
2025-05-29 01:03:22.228900 | orchestrator | included: /ansible/roles/placement/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 01:03:22.228908 | orchestrator |
2025-05-29 01:03:22.228915 | orchestrator | TASK [service-ks-register : placement | Creating services] *********************
2025-05-29 01:03:22.228922 | orchestrator | Thursday 29 May 2025 01:01:54 +0000 (0:00:01.652) 0:00:02.830 **********
2025-05-29 01:03:22.228930 | orchestrator | changed: [testbed-node-0] => (item=placement (placement))
2025-05-29 01:03:22.228936 | orchestrator |
2025-05-29 01:03:22.228943 | orchestrator | TASK [service-ks-register : placement | Creating endpoints] ********************
2025-05-29 01:03:22.228949 | orchestrator | Thursday 29 May 2025 01:01:58 +0000 (0:00:03.693) 0:00:06.524 **********
2025-05-29 01:03:22.228956 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api-int.testbed.osism.xyz:8780 -> internal)
2025-05-29 01:03:22.228963 | orchestrator | changed: [testbed-node-0] => (item=placement -> https://api.testbed.osism.xyz:8780 -> public)
2025-05-29 01:03:22.228970 | orchestrator |
2025-05-29 01:03:22.228976 | orchestrator | TASK [service-ks-register : placement | Creating projects] *********************
2025-05-29 01:03:22.229004 | orchestrator | Thursday 29 May 2025 01:02:04 +0000 (0:00:06.444) 0:00:12.968 **********
2025-05-29 01:03:22.229012 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-29 01:03:22.229019 | orchestrator |
2025-05-29 01:03:22.229025 | orchestrator | TASK [service-ks-register : placement | Creating users] ************************
2025-05-29 01:03:22.229032 | orchestrator | Thursday 29 May 2025 01:02:08 +0000 (0:00:04.106) 0:00:17.075 **********
2025-05-29 01:03:22.229039 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-29 01:03:22.229046 | orchestrator | changed: [testbed-node-0] => (item=placement -> service)
2025-05-29 01:03:22.229053 | orchestrator |
2025-05-29 01:03:22.229060 | orchestrator | TASK [service-ks-register : placement | Creating roles] ************************
2025-05-29 01:03:22.229066 | orchestrator | Thursday 29 May 2025 01:02:12 +0000 (0:00:04.076) 0:00:21.152 **********
2025-05-29 01:03:22.229073 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-29 01:03:22.229079 | orchestrator |
2025-05-29 01:03:22.229085 | orchestrator | TASK [service-ks-register :
placement | Granting user roles] *******************
2025-05-29 01:03:22.229091 | orchestrator | Thursday 29 May 2025 01:02:16 +0000 (0:00:03.330) 0:00:24.482 **********
2025-05-29 01:03:22.229097 | orchestrator | changed: [testbed-node-0] => (item=placement -> service -> admin)
2025-05-29 01:03:22.229108 | orchestrator |
2025-05-29 01:03:22.229114 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-05-29 01:03:22.229119 | orchestrator | Thursday 29 May 2025 01:02:21 +0000 (0:00:05.110) 0:00:29.593 **********
2025-05-29 01:03:22.229125 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:03:22.229131 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:03:22.229136 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:03:22.229142 | orchestrator |
2025-05-29 01:03:22.229148 | orchestrator | TASK [placement : Ensuring config directories exist] ***************************
2025-05-29 01:03:22.229155 | orchestrator | Thursday 29 May 2025 01:02:23 +0000 (0:00:01.756) 0:00:31.350 **********
2025-05-29 01:03:22.229179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780',
'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229189 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229196 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229203 | orchestrator | 2025-05-29 01:03:22.229209 | 
orchestrator | TASK [placement : Check if policies shall be overwritten] **********************
2025-05-29 01:03:22.229216 | orchestrator | Thursday 29 May 2025 01:02:25 +0000 (0:00:02.129) 0:00:33.480 **********
2025-05-29 01:03:22.229223 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:03:22.229229 | orchestrator |
2025-05-29 01:03:22.229236 | orchestrator | TASK [placement : Set placement policy file] ***********************************
2025-05-29 01:03:22.229242 | orchestrator | Thursday 29 May 2025 01:02:25 +0000 (0:00:00.139) 0:00:33.620 **********
2025-05-29 01:03:22.229253 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:03:22.229260 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:03:22.229267 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:03:22.229273 | orchestrator |
2025-05-29 01:03:22.229280 | orchestrator | TASK [placement : include_tasks] ***********************************************
2025-05-29 01:03:22.229286 | orchestrator | Thursday 29 May 2025 01:02:25 +0000 (0:00:00.277) 0:00:33.898 **********
2025-05-29 01:03:22.229293 | orchestrator | included: /ansible/roles/placement/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 01:03:22.229300 | orchestrator |
2025-05-29 01:03:22.229306 | orchestrator | TASK [service-cert-copy : placement | Copying over extra CA certificates] ******
2025-05-29 01:03:22.229313 | orchestrator | Thursday 29 May 2025 01:02:26 +0000 (0:00:01.320) 0:00:35.218 **********
2025-05-29 01:03:22.229329 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval':
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229342 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229349 | orchestrator | 2025-05-29 01:03:22.229356 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS certificate] *** 2025-05-29 01:03:22.229362 | orchestrator | Thursday 29 May 2025 01:02:29 +0000 (0:00:02.620) 0:00:37.838 ********** 2025-05-29 01:03:22.229369 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-29 01:03:22.229381 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:03:22.229391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-29 01:03:22.229402 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:03:22.229408 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-29 01:03:22.229415 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:03:22.229421 | orchestrator | 2025-05-29 01:03:22.229428 | orchestrator | TASK [service-cert-copy : placement | Copying over backend internal TLS key] *** 2025-05-29 01:03:22.229434 | orchestrator | Thursday 29 May 2025 
01:02:30 +0000 (0:00:01.158) 0:00:38.997 ********** 2025-05-29 01:03:22.229441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-29 01:03:22.229448 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:03:22.229458 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 
'tls_backend': 'no'}}}})  2025-05-29 01:03:22.229464 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:03:22.229471 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-29 01:03:22.229478 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:03:22.229484 | orchestrator | 2025-05-29 01:03:22.229497 | orchestrator | TASK [placement : Copying over config.json files for services] ***************** 2025-05-29 01:03:22.229503 | orchestrator | Thursday 29 May 2025 01:02:33 +0000 (0:00:02.461) 0:00:41.458 ********** 2025-05-29 01:03:22.229509 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229516 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229527 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 
'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229534 | orchestrator | 2025-05-29 01:03:22.229541 | orchestrator | TASK [placement : Copying over placement.conf] ********************************* 2025-05-29 01:03:22.229547 | orchestrator | Thursday 29 May 2025 01:02:35 +0000 (0:00:01.976) 0:00:43.435 ********** 2025-05-29 01:03:22.229555 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229569 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229577 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229592 | orchestrator | 2025-05-29 01:03:22.229599 | orchestrator | TASK [placement : Copying over placement-api wsgi configuration] *************** 2025-05-29 01:03:22.229606 | orchestrator | Thursday 29 May 2025 01:02:39 +0000 (0:00:03.874) 0:00:47.310 ********** 2025-05-29 01:03:22.229613 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-29 01:03:22.229619 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-29 
01:03:22.229627 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/placement/templates/placement-api-wsgi.conf.j2) 2025-05-29 01:03:22.229634 | orchestrator | 2025-05-29 01:03:22.229640 | orchestrator | TASK [placement : Copying over migrate-db.rc.j2 configuration] ***************** 2025-05-29 01:03:22.229647 | orchestrator | Thursday 29 May 2025 01:02:42 +0000 (0:00:03.172) 0:00:50.482 ********** 2025-05-29 01:03:22.229654 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:03:22.229661 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:03:22.229668 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:03:22.229675 | orchestrator | 2025-05-29 01:03:22.229682 | orchestrator | TASK [placement : Copying over existing policy file] *************************** 2025-05-29 01:03:22.229688 | orchestrator | Thursday 29 May 2025 01:02:44 +0000 (0:00:02.031) 0:00:52.514 ********** 2025-05-29 01:03:22.229696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-29 01:03:22.229703 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:03:22.229718 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-29 01:03:22.229853 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:03:22.229864 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}})  2025-05-29 01:03:22.229881 | orchestrator | skipping: [testbed-node-2] 
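The service definitions echoed in the task items above all follow the same kolla-ansible shape: a key naming the service, and a value with the container name, image, volumes, a Docker-style healthcheck, and optional haproxy frontends. A minimal sketch of reading one such definition, with field names taken directly from the log (the `summarize_service` helper is hypothetical, for illustration only, and not part of kolla-ansible):

```python
# Sketch: interpret a kolla-style service definition as logged above.
# The dict shape mirrors the log output; summarize_service() is a
# hypothetical helper, not part of kolla-ansible itself.

service = {
    "key": "placement-api",
    "value": {
        "container_name": "placement_api",
        "image": "registry.osism.tech/kolla/release/placement-api:11.0.0.20241206",
        "enabled": True,
        "healthcheck": {
            "interval": "30", "retries": "3", "start_period": "5",
            "test": ["CMD-SHELL", "healthcheck_curl http://192.168.16.10:8780"],
            "timeout": "30",
        },
        "haproxy": {
            "placement_api": {"enabled": True, "external": False, "port": "8780"},
            "placement_api_external": {"enabled": True, "external": True,
                                       "external_fqdn": "api.testbed.osism.xyz",
                                       "port": "8780"},
        },
    },
}

def summarize_service(svc):
    """Return (container name, healthcheck command, external FQDNs)."""
    value = svc["value"]
    # Drop the "CMD-SHELL" marker; the rest is the command line run in-container.
    check = " ".join(value["healthcheck"]["test"][1:])
    # Only frontends flagged external carry an external_fqdn in the log.
    fqdns = [f.get("external_fqdn") for f in value.get("haproxy", {}).values()
             if f.get("external")]
    return value["container_name"], check, fqdns

print(summarize_service(service))
```

This is why the "Copying over existing policy file" task can skip per item: each loop iteration receives one of these definitions, and the skip condition depends only on whether a custom policy file was supplied, not on the definition itself.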
2025-05-29 01:03:22.229888 | orchestrator | 2025-05-29 01:03:22.229895 | orchestrator | TASK [placement : Check placement containers] ********************************** 2025-05-29 01:03:22.229902 | orchestrator | Thursday 29 May 2025 01:02:46 +0000 (0:00:02.081) 0:00:54.595 ********** 2025-05-29 01:03:22.229909 | orchestrator | changed: [testbed-node-0] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229917 | orchestrator | changed: [testbed-node-1] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 
'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'placement-api', 'value': {'container_name': 'placement_api', 'group': 'placement-api', 'image': 'registry.osism.tech/kolla/release/placement-api:11.0.0.20241206', 'enabled': True, 'volumes': ['/etc/kolla/placement-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8780'], 'timeout': '30'}, 'haproxy': {'placement_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}, 'placement_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8780', 'listen_port': '8780', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:22.229940 | orchestrator | 2025-05-29 01:03:22.229947 | orchestrator | TASK [placement : Creating placement databases] ******************************** 2025-05-29 01:03:22.229953 | orchestrator | Thursday 29 May 2025 01:02:47 +0000 (0:00:01.674) 0:00:56.269 ********** 2025-05-29 01:03:22.229960 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:03:22.229967 | orchestrator | 2025-05-29 01:03:22.229973 | orchestrator | TASK [placement : Creating placement databases user and setting permissions] *** 2025-05-29 01:03:22.229980 | orchestrator | Thursday 29 May 2025 01:02:50 +0000 (0:00:02.895) 0:00:59.165 ********** 2025-05-29 01:03:22.230002 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:03:22.230043 | orchestrator | 2025-05-29 01:03:22.230052 | orchestrator | TASK [placement : Running placement bootstrap container] 
*********************** 2025-05-29 01:03:22.230059 | orchestrator | Thursday 29 May 2025 01:02:53 +0000 (0:00:02.479) 0:01:01.645 ********** 2025-05-29 01:03:22.230065 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:03:22.230071 | orchestrator | 2025-05-29 01:03:22.230078 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-29 01:03:22.230085 | orchestrator | Thursday 29 May 2025 01:03:05 +0000 (0:00:12.006) 0:01:13.651 ********** 2025-05-29 01:03:22.230092 | orchestrator | 2025-05-29 01:03:22.230099 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-29 01:03:22.230106 | orchestrator | Thursday 29 May 2025 01:03:05 +0000 (0:00:00.131) 0:01:13.783 ********** 2025-05-29 01:03:22.230113 | orchestrator | 2025-05-29 01:03:22.230120 | orchestrator | TASK [placement : Flush handlers] ********************************************** 2025-05-29 01:03:22.230127 | orchestrator | Thursday 29 May 2025 01:03:06 +0000 (0:00:00.619) 0:01:14.403 ********** 2025-05-29 01:03:22.230134 | orchestrator | 2025-05-29 01:03:22.230141 | orchestrator | RUNNING HANDLER [placement : Restart placement-api container] ****************** 2025-05-29 01:03:22.230148 | orchestrator | Thursday 29 May 2025 01:03:06 +0000 (0:00:00.203) 0:01:14.607 ********** 2025-05-29 01:03:22.230155 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:03:22.230162 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:03:22.230169 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:03:22.230177 | orchestrator | 2025-05-29 01:03:22.230184 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 01:03:22.230191 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0 2025-05-29 01:03:22.230200 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 
ignored=0 2025-05-29 01:03:22.230206 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=5  rescued=0 ignored=0 2025-05-29 01:03:22.230214 | orchestrator | 2025-05-29 01:03:22.230220 | orchestrator | 2025-05-29 01:03:22.230227 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 01:03:22.230234 | orchestrator | Thursday 29 May 2025 01:03:18 +0000 (0:00:12.346) 0:01:26.953 ********** 2025-05-29 01:03:22.230241 | orchestrator | =============================================================================== 2025-05-29 01:03:22.230248 | orchestrator | placement : Restart placement-api container ---------------------------- 12.35s 2025-05-29 01:03:22.230255 | orchestrator | placement : Running placement bootstrap container ---------------------- 12.01s 2025-05-29 01:03:22.230262 | orchestrator | service-ks-register : placement | Creating endpoints -------------------- 6.44s 2025-05-29 01:03:22.230269 | orchestrator | service-ks-register : placement | Granting user roles ------------------- 5.11s 2025-05-29 01:03:22.230276 | orchestrator | service-ks-register : placement | Creating projects --------------------- 4.11s 2025-05-29 01:03:22.230283 | orchestrator | service-ks-register : placement | Creating users ------------------------ 4.08s 2025-05-29 01:03:22.230290 | orchestrator | placement : Copying over placement.conf --------------------------------- 3.87s 2025-05-29 01:03:22.230297 | orchestrator | service-ks-register : placement | Creating services --------------------- 3.69s 2025-05-29 01:03:22.230304 | orchestrator | service-ks-register : placement | Creating roles ------------------------ 3.33s 2025-05-29 01:03:22.230311 | orchestrator | placement : Copying over placement-api wsgi configuration --------------- 3.17s 2025-05-29 01:03:22.230318 | orchestrator | placement : Creating placement databases -------------------------------- 2.90s 2025-05-29 01:03:22.230324 | 
orchestrator | service-cert-copy : placement | Copying over extra CA certificates ------ 2.62s 2025-05-29 01:03:22.230330 | orchestrator | placement : Creating placement databases user and setting permissions --- 2.48s 2025-05-29 01:03:22.230336 | orchestrator | service-cert-copy : placement | Copying over backend internal TLS key --- 2.46s 2025-05-29 01:03:22.230348 | orchestrator | placement : Ensuring config directories exist --------------------------- 2.13s 2025-05-29 01:03:22.230355 | orchestrator | placement : Copying over existing policy file --------------------------- 2.08s 2025-05-29 01:03:22.230361 | orchestrator | placement : Copying over migrate-db.rc.j2 configuration ----------------- 2.03s 2025-05-29 01:03:22.230369 | orchestrator | placement : Copying over config.json files for services ----------------- 1.98s 2025-05-29 01:03:22.230376 | orchestrator | placement : include_tasks ----------------------------------------------- 1.76s 2025-05-29 01:03:22.230383 | orchestrator | placement : Check placement containers ---------------------------------- 1.67s 2025-05-29 01:03:22.230394 | orchestrator | 2025-05-29 01:03:22 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:03:22.230405 | orchestrator | 2025-05-29 01:03:22 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:03:22.230413 | orchestrator | 2025-05-29 01:03:22 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:03:22.230420 | orchestrator | 2025-05-29 01:03:22 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:03:25.276343 | orchestrator | 2025-05-29 01:03:25 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:03:25.276477 | orchestrator | 2025-05-29 01:03:25 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED 2025-05-29 01:03:25.277149 | orchestrator | 2025-05-29 01:03:25 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is 
in state STARTED 2025-05-29 01:03:25.278308 | orchestrator | 2025-05-29 01:03:25 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state STARTED 2025-05-29 01:03:25.279180 | orchestrator | 2025-05-29 01:03:25 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:03:25.279253 | orchestrator | 2025-05-29 01:03:25 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:03:28.317797 | orchestrator | 2025-05-29 01:03:28 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:03:28.318211 | orchestrator | 2025-05-29 01:03:28 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED 2025-05-29 01:03:28.318854 | orchestrator | 2025-05-29 01:03:28 | INFO  | Task a9604cbf-f7a3-4150-922e-75d187abd0c2 is in state STARTED 2025-05-29 01:03:28.319455 | orchestrator | 2025-05-29 01:03:28 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:03:28.320589 | orchestrator | 2025-05-29 01:03:28.320619 | orchestrator | 2025-05-29 01:03:28 | INFO  | Task 6186519a-80c7-4062-9cf9-1e1244758c56 is in state SUCCESS 2025-05-29 01:03:28.322324 | orchestrator | 2025-05-29 01:03:28.322380 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-29 01:03:28.322394 | orchestrator | 2025-05-29 01:03:28.322406 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-29 01:03:28.322418 | orchestrator | Thursday 29 May 2025 01:01:15 +0000 (0:00:00.521) 0:00:00.521 ********** 2025-05-29 01:03:28.322429 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:03:28.322441 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:03:28.322452 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:03:28.322462 | orchestrator | 2025-05-29 01:03:28.322474 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-29 01:03:28.322485 | orchestrator | Thursday 29 
May 2025 01:01:16 +0000 (0:00:00.632) 0:00:01.154 ********** 2025-05-29 01:03:28.322496 | orchestrator | ok: [testbed-node-0] => (item=enable_barbican_True) 2025-05-29 01:03:28.322508 | orchestrator | ok: [testbed-node-1] => (item=enable_barbican_True) 2025-05-29 01:03:28.322518 | orchestrator | ok: [testbed-node-2] => (item=enable_barbican_True) 2025-05-29 01:03:28.322529 | orchestrator | 2025-05-29 01:03:28.322540 | orchestrator | PLAY [Apply role barbican] ***************************************************** 2025-05-29 01:03:28.322576 | orchestrator | 2025-05-29 01:03:28.322596 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-29 01:03:28.322615 | orchestrator | Thursday 29 May 2025 01:01:16 +0000 (0:00:00.337) 0:00:01.492 ********** 2025-05-29 01:03:28.322632 | orchestrator | included: /ansible/roles/barbican/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:03:28.322652 | orchestrator | 2025-05-29 01:03:28.322670 | orchestrator | TASK [service-ks-register : barbican | Creating services] ********************** 2025-05-29 01:03:28.322686 | orchestrator | Thursday 29 May 2025 01:01:17 +0000 (0:00:00.595) 0:00:02.088 ********** 2025-05-29 01:03:28.322703 | orchestrator | changed: [testbed-node-0] => (item=barbican (key-manager)) 2025-05-29 01:03:28.322721 | orchestrator | 2025-05-29 01:03:28.322740 | orchestrator | TASK [service-ks-register : barbican | Creating endpoints] ********************* 2025-05-29 01:03:28.322759 | orchestrator | Thursday 29 May 2025 01:01:20 +0000 (0:00:03.510) 0:00:05.598 ********** 2025-05-29 01:03:28.322777 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api-int.testbed.osism.xyz:9311 -> internal) 2025-05-29 01:03:28.322792 | orchestrator | changed: [testbed-node-0] => (item=barbican -> https://api.testbed.osism.xyz:9311 -> public) 2025-05-29 01:03:28.322803 | orchestrator | 2025-05-29 01:03:28.322814 | orchestrator | TASK 
[service-ks-register : barbican | Creating projects] ********************** 2025-05-29 01:03:28.322825 | orchestrator | Thursday 29 May 2025 01:01:27 +0000 (0:00:06.884) 0:00:12.483 ********** 2025-05-29 01:03:28.322836 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-29 01:03:28.322848 | orchestrator | 2025-05-29 01:03:28.322859 | orchestrator | TASK [service-ks-register : barbican | Creating users] ************************* 2025-05-29 01:03:28.323345 | orchestrator | Thursday 29 May 2025 01:01:31 +0000 (0:00:03.390) 0:00:15.873 ********** 2025-05-29 01:03:28.323364 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-29 01:03:28.323375 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service) 2025-05-29 01:03:28.323386 | orchestrator | 2025-05-29 01:03:28.323397 | orchestrator | TASK [service-ks-register : barbican | Creating roles] ************************* 2025-05-29 01:03:28.323407 | orchestrator | Thursday 29 May 2025 01:01:35 +0000 (0:00:03.754) 0:00:19.628 ********** 2025-05-29 01:03:28.323418 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-29 01:03:28.323445 | orchestrator | changed: [testbed-node-0] => (item=key-manager:service-admin) 2025-05-29 01:03:28.323456 | orchestrator | changed: [testbed-node-0] => (item=creator) 2025-05-29 01:03:28.323467 | orchestrator | changed: [testbed-node-0] => (item=observer) 2025-05-29 01:03:28.323478 | orchestrator | changed: [testbed-node-0] => (item=audit) 2025-05-29 01:03:28.323489 | orchestrator | 2025-05-29 01:03:28.323500 | orchestrator | TASK [service-ks-register : barbican | Granting user roles] ******************** 2025-05-29 01:03:28.323511 | orchestrator | Thursday 29 May 2025 01:01:50 +0000 (0:00:15.312) 0:00:34.940 ********** 2025-05-29 01:03:28.323522 | orchestrator | changed: [testbed-node-0] => (item=barbican -> service -> admin) 2025-05-29 01:03:28.323533 | orchestrator | 2025-05-29 01:03:28.323544 | orchestrator | TASK [barbican : 
Ensuring config directories exist] **************************** 2025-05-29 01:03:28.323554 | orchestrator | Thursday 29 May 2025 01:01:55 +0000 (0:00:04.952) 0:00:39.892 ********** 2025-05-29 01:03:28.323568 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.323613 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 
'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.323627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.323646 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.323659 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.323671 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.323698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.323712 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.323723 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.323735 | orchestrator | 2025-05-29 01:03:28.323746 | orchestrator | TASK [barbican : Ensuring vassals config directories exist] ******************** 2025-05-29 01:03:28.323758 | orchestrator | Thursday 29 May 2025 01:01:57 +0000 (0:00:02.437) 0:00:42.330 ********** 2025-05-29 01:03:28.323769 | orchestrator | changed: [testbed-node-1] => (item=barbican-api/vassals) 2025-05-29 01:03:28.323781 | orchestrator | changed: [testbed-node-0] => (item=barbican-api/vassals) 2025-05-29 01:03:28.323792 | orchestrator | changed: [testbed-node-2] => 
(item=barbican-api/vassals) 2025-05-29 01:03:28.323803 | orchestrator | 2025-05-29 01:03:28.323814 | orchestrator | TASK [barbican : Check if policies shall be overwritten] *********************** 2025-05-29 01:03:28.323825 | orchestrator | Thursday 29 May 2025 01:01:59 +0000 (0:00:02.194) 0:00:44.524 ********** 2025-05-29 01:03:28.323835 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:03:28.323847 | orchestrator | 2025-05-29 01:03:28.323858 | orchestrator | TASK [barbican : Set barbican policy file] ************************************* 2025-05-29 01:03:28.323873 | orchestrator | Thursday 29 May 2025 01:02:00 +0000 (0:00:00.117) 0:00:44.642 ********** 2025-05-29 01:03:28.323885 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:03:28.323896 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:03:28.323907 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:03:28.323917 | orchestrator | 2025-05-29 01:03:28.323929 | orchestrator | TASK [barbican : include_tasks] ************************************************ 2025-05-29 01:03:28.323939 | orchestrator | Thursday 29 May 2025 01:02:00 +0000 (0:00:00.462) 0:00:45.104 ********** 2025-05-29 01:03:28.323953 | orchestrator | included: /ansible/roles/barbican/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:03:28.323967 | orchestrator | 2025-05-29 01:03:28.324030 | orchestrator | TASK [service-cert-copy : barbican | Copying over extra CA certificates] ******* 2025-05-29 01:03:28.324046 | orchestrator | Thursday 29 May 2025 01:02:01 +0000 (0:00:01.335) 0:00:46.439 ********** 2025-05-29 01:03:28.324060 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.324083 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.324098 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.324118 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.324132 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.324153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.324173 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.324187 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.324202 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.324214 | orchestrator | 2025-05-29 01:03:28.324228 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS certificate] *** 2025-05-29 01:03:28.324240 | orchestrator | Thursday 29 May 2025 01:02:06 +0000 (0:00:04.194) 0:00:50.633 ********** 2025-05-29 01:03:28.324259 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': 
'9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-29 01:03:28.324280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.324301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.324314 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:03:28.324327 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 
'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-29 01:03:28.324339 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.324355 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.324373 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:03:28.324384 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-29 01:03:28.324403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.324415 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.324427 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:03:28.324438 | orchestrator | 2025-05-29 01:03:28.324449 | orchestrator | TASK [service-cert-copy : barbican | Copying over backend internal TLS key] **** 2025-05-29 01:03:28.324460 | orchestrator | Thursday 29 May 2025 01:02:08 +0000 (0:00:02.271) 0:00:52.905 ********** 2025-05-29 01:03:28.324472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-29 
01:03:28.324488 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.324512 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.324524 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:03:28.324542 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-29 01:03:28.324554 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.324566 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.324578 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:03:28.324594 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-29 01:03:28.324613 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.324624 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.324635 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:03:28.324646 | orchestrator | 2025-05-29 01:03:28.324657 | orchestrator | TASK [barbican : Copying over config.json files for services] ****************** 2025-05-29 01:03:28.324674 | orchestrator | Thursday 29 May 2025 01:02:10 +0000 (0:00:02.227) 0:00:55.132 ********** 2025-05-29 01:03:28.324685 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.324698 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.324720 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.324733 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 
'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.324751 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.324764 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 
5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.324775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.324792 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.324808 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 
5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.324819 | orchestrator | 2025-05-29 01:03:28.324831 | orchestrator | TASK [barbican : Copying over barbican-api.ini] ******************************** 2025-05-29 01:03:28.324842 | orchestrator | Thursday 29 May 2025 01:02:14 +0000 (0:00:04.134) 0:00:59.267 ********** 2025-05-29 01:03:28.324853 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:03:28.324864 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:03:28.324874 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:03:28.324885 | orchestrator | 2025-05-29 01:03:28.324896 | orchestrator | TASK [barbican : Checking whether barbican-api-paste.ini file exists] ********** 2025-05-29 01:03:28.324907 | orchestrator | Thursday 29 May 2025 01:02:17 +0000 (0:00:03.250) 0:01:02.517 ********** 2025-05-29 01:03:28.324918 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-29 01:03:28.324929 | orchestrator | 2025-05-29 01:03:28.324940 | orchestrator | TASK [barbican : Copying over barbican-api-paste.ini] ************************** 2025-05-29 01:03:28.324951 | orchestrator | Thursday 29 May 2025 01:02:19 +0000 (0:00:01.245) 0:01:03.763 ********** 2025-05-29 01:03:28.324962 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:03:28.324973 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:03:28.325070 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:03:28.325082 | orchestrator | 2025-05-29 01:03:28.325093 | orchestrator | TASK [barbican : Copying over barbican.conf] *********************************** 2025-05-29 01:03:28.325104 | orchestrator | Thursday 29 May 2025 01:02:20 +0000 (0:00:01.583) 0:01:05.347 ********** 2025-05-29 01:03:28.325130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.325151 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.325188 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.325209 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.325227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.325255 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.325282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.325300 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.325323 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.325342 | orchestrator | 2025-05-29 01:03:28.325358 | orchestrator | TASK [barbican : Copying over existing policy file] **************************** 2025-05-29 01:03:28.325388 | orchestrator | Thursday 29 May 2025 01:02:33 +0000 (0:00:12.369) 0:01:17.716 ********** 2025-05-29 01:03:28.325400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': 
{'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-29 01:03:28.325417 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.325428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.325447 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:03:28.325457 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': 
{'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-29 01:03:28.325472 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.325483 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.325492 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:03:28.325510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}})  2025-05-29 01:03:28.325527 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.325537 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:03:28.325547 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:03:28.325557 | orchestrator | 2025-05-29 01:03:28.325567 | orchestrator | TASK [barbican : Check barbican containers] ************************************ 2025-05-29 01:03:28.325577 | orchestrator | Thursday 29 May 2025 01:02:34 +0000 (0:00:01.276) 0:01:18.993 ********** 2025-05-29 01:03:28.325591 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.325602 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.325619 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-api', 'value': {'container_name': 'barbican_api', 'group': 'barbican-api', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-api:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'barbican:/var/lib/barbican/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9311'], 'timeout': '30'}, 'haproxy': {'barbican_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}, 'barbican_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9311', 'listen_port': '9311', 'tls_backend': 'no'}}}}) 2025-05-29 01:03:28.325641 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.325652 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.325666 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-keystone-listener', 'value': {'container_name': 'barbican_keystone_listener', 'group': 'barbican-keystone-listener', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-keystone-listener:18.0.1.20241206', 'volumes': 
['/etc/kolla/barbican-keystone-listener/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-keystone-listener 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.325677 | orchestrator | changed: [testbed-node-2] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.325687 | orchestrator | changed: [testbed-node-1] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:03:28.325711 | orchestrator | changed: [testbed-node-0] => (item={'key': 'barbican-worker', 'value': {'container_name': 'barbican_worker', 'group': 'barbican-worker', 'enabled': True, 'environment': {'CS_AUTH_KEYS': ''}, 'image': 
'registry.osism.tech/kolla/release/barbican-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/barbican-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port barbican-worker 5672'], 'timeout': '30'}}})
2025-05-29 01:03:28.325722 | orchestrator |
2025-05-29 01:03:28.325732 | orchestrator | TASK [barbican : include_tasks] ************************************************
2025-05-29 01:03:28.325742 | orchestrator | Thursday 29 May 2025 01:02:37 +0000 (0:00:03.315) 0:01:22.308 **********
2025-05-29 01:03:28.325752 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:03:28.325762 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:03:28.325771 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:03:28.325781 | orchestrator |
2025-05-29 01:03:28.325791 | orchestrator | TASK [barbican : Creating barbican database] ***********************************
2025-05-29 01:03:28.325801 | orchestrator | Thursday 29 May 2025 01:02:38 +0000 (0:00:00.629) 0:01:22.937 **********
2025-05-29 01:03:28.325810 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:03:28.325820 | orchestrator |
2025-05-29 01:03:28.325936 | orchestrator | TASK [barbican : Creating barbican database user and setting permissions] ******
2025-05-29 01:03:28.325947 | orchestrator | Thursday 29 May 2025 01:02:41 +0000 (0:00:02.909) 0:01:25.847 **********
2025-05-29 01:03:28.325957 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:03:28.325967 | orchestrator |
2025-05-29 01:03:28.325994 | orchestrator | TASK [barbican : Running barbican bootstrap container] *************************
2025-05-29 01:03:28.326005 | orchestrator | Thursday 29 May 2025 01:02:43 +0000 (0:00:02.492) 0:01:28.340 **********
2025-05-29 01:03:28.326045 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:03:28.326057 | orchestrator |
2025-05-29 01:03:28.326067 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-29 01:03:28.326109 | orchestrator | Thursday 29 May 2025 01:02:55 +0000 (0:00:12.082) 0:01:40.422 **********
2025-05-29 01:03:28.326121 | orchestrator |
2025-05-29 01:03:28.326137 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-29 01:03:28.326152 | orchestrator | Thursday 29 May 2025 01:02:55 +0000 (0:00:00.064) 0:01:40.487 **********
2025-05-29 01:03:28.326162 | orchestrator |
2025-05-29 01:03:28.326172 | orchestrator | TASK [barbican : Flush handlers] ***********************************************
2025-05-29 01:03:28.326189 | orchestrator | Thursday 29 May 2025 01:02:56 +0000 (0:00:00.168) 0:01:40.655 **********
2025-05-29 01:03:28.326203 | orchestrator |
2025-05-29 01:03:28.326220 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-api container] ********************
2025-05-29 01:03:28.326237 | orchestrator | Thursday 29 May 2025 01:02:56 +0000 (0:00:00.069) 0:01:40.724 **********
2025-05-29 01:03:28.326247 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:03:28.326257 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:03:28.326267 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:03:28.326277 | orchestrator |
2025-05-29 01:03:28.326286 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-keystone-listener container] ******
2025-05-29 01:03:28.326296 | orchestrator | Thursday 29 May 2025 01:03:05 +0000 (0:00:09.125) 0:01:49.849 **********
2025-05-29 01:03:28.326305 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:03:28.326315 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:03:28.326325 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:03:28.326334 | orchestrator |
2025-05-29 01:03:28.326350 | orchestrator | RUNNING HANDLER [barbican : Restart barbican-worker container] *****************
2025-05-29 01:03:28.326360 | orchestrator | Thursday 29 May 2025 01:03:17 +0000 (0:00:12.573) 0:02:02.423 **********
2025-05-29 01:03:28.326370 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:03:28.326388 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:03:28.326397 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:03:28.326407 | orchestrator |
2025-05-29 01:03:28.326417 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:03:28.326427 | orchestrator | testbed-node-0 : ok=24  changed=18  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-29 01:03:28.326438 | orchestrator | testbed-node-1 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-29 01:03:28.326448 | orchestrator | testbed-node-2 : ok=14  changed=10  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-29 01:03:28.326458 | orchestrator |
2025-05-29 01:03:28.326467 | orchestrator |
2025-05-29 01:03:28.326478 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 01:03:28.326487 | orchestrator | Thursday 29 May 2025 01:03:26 +0000 (0:00:08.672) 0:02:11.096 **********
2025-05-29 01:03:28.326497 | orchestrator | ===============================================================================
2025-05-29 01:03:28.326506 | orchestrator | service-ks-register : barbican | Creating roles ------------------------ 15.31s
2025-05-29 01:03:28.326516 | orchestrator | barbican : Restart barbican-keystone-listener container ---------------- 12.57s
2025-05-29 01:03:28.326525 | orchestrator | barbican : Copying over barbican.conf ---------------------------------- 12.37s
2025-05-29 01:03:28.326535 | orchestrator | barbican : Running barbican bootstrap container ------------------------ 12.08s
2025-05-29 01:03:28.326545 | orchestrator | barbican : Restart barbican-api container ------------------------------- 9.13s
2025-05-29 01:03:28.326554 | orchestrator | barbican : Restart barbican-worker container ---------------------------- 8.67s
2025-05-29 01:03:28.326564 | orchestrator | service-ks-register : barbican | Creating endpoints --------------------- 6.88s
2025-05-29 01:03:28.326581 | orchestrator | service-ks-register : barbican | Granting user roles -------------------- 4.95s
2025-05-29 01:03:28.326591 | orchestrator | service-cert-copy : barbican | Copying over extra CA certificates ------- 4.19s
2025-05-29 01:03:28.326601 | orchestrator | barbican : Copying over config.json files for services ------------------ 4.13s
2025-05-29 01:03:28.326611 | orchestrator | service-ks-register : barbican | Creating users ------------------------- 3.75s
2025-05-29 01:03:28.326620 | orchestrator | service-ks-register : barbican | Creating services ---------------------- 3.51s
2025-05-29 01:03:28.326630 | orchestrator | service-ks-register : barbican | Creating projects ---------------------- 3.39s
2025-05-29 01:03:28.326639 | orchestrator | barbican : Check barbican containers ------------------------------------ 3.32s
2025-05-29 01:03:28.326649 | orchestrator | barbican : Copying over barbican-api.ini -------------------------------- 3.25s
2025-05-29 01:03:28.326659 | orchestrator | barbican : Creating barbican database ----------------------------------- 2.91s
2025-05-29 01:03:28.326668 | orchestrator | barbican : Creating barbican database user and setting permissions ------ 2.49s
2025-05-29 01:03:28.326678 | orchestrator | barbican : Ensuring config directories exist ---------------------------- 2.44s
2025-05-29 01:03:28.326687 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS certificate --- 2.27s
2025-05-29 01:03:28.326697 | orchestrator | service-cert-copy : barbican | Copying over backend internal TLS key ---- 2.23s
2025-05-29 01:03:28.326707 | orchestrator | 2025-05-29 01:03:28 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state
STARTED
2025-05-29 01:03:28.326717 | orchestrator | 2025-05-29 01:03:28 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:03:31.363409 | orchestrator | 2025-05-29 01:03:31 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:03:31.365631 | orchestrator | 2025-05-29 01:03:31 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:03:31.366446 | orchestrator | 2025-05-29 01:03:31 | INFO  | Task a9604cbf-f7a3-4150-922e-75d187abd0c2 is in state STARTED
2025-05-29 01:03:31.366956 | orchestrator | 2025-05-29 01:03:31 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:03:31.367535 | orchestrator | 2025-05-29 01:03:31 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:03:31.367560 | orchestrator | 2025-05-29 01:03:31 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:03:34.438939 | orchestrator | 2025-05-29 01:03:34 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:03:34.439208 | orchestrator | 2025-05-29 01:03:34 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:03:34.439224 | orchestrator | 2025-05-29 01:03:34 | INFO  | Task a9604cbf-f7a3-4150-922e-75d187abd0c2 is in state SUCCESS
2025-05-29 01:03:34.439233 | orchestrator | 2025-05-29 01:03:34 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:03:34.439261 | orchestrator | 2025-05-29 01:03:34 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:03:34.439282 | orchestrator | 2025-05-29 01:03:34 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:03:34.439291 | orchestrator | 2025-05-29 01:03:34 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:03:37.474808 | orchestrator | 2025-05-29 01:03:37 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:03:37.474912 | orchestrator | 2025-05-29 01:03:37 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:03:37.474938 | orchestrator | 2025-05-29 01:03:37 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:03:37.476856 | orchestrator | 2025-05-29 01:03:37 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:03:37.477313 | orchestrator | 2025-05-29 01:03:37 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:03:37.477343 | orchestrator | 2025-05-29 01:03:37 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:03:40.510397 | orchestrator | 2025-05-29 01:03:40 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:03:40.510508 | orchestrator | 2025-05-29 01:03:40 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:03:40.511474 | orchestrator | 2025-05-29 01:03:40 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:03:40.511885 | orchestrator | 2025-05-29 01:03:40 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:03:40.512574 | orchestrator | 2025-05-29 01:03:40 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:03:40.512607 | orchestrator | 2025-05-29 01:03:40 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:03:43.554499 | orchestrator | 2025-05-29 01:03:43 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:03:43.554630 | orchestrator | 2025-05-29 01:03:43 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:03:43.554895 | orchestrator | 2025-05-29 01:03:43 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:03:43.555573 | orchestrator | 2025-05-29 01:03:43 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:03:43.556857 | orchestrator | 2025-05-29 01:03:43 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:03:43.556888 | orchestrator | 2025-05-29 01:03:43 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:03:46.591427 | orchestrator | 2025-05-29 01:03:46 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:03:46.591528 | orchestrator | 2025-05-29 01:03:46 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:03:46.591543 | orchestrator | 2025-05-29 01:03:46 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:03:46.591555 | orchestrator | 2025-05-29 01:03:46 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:03:46.592075 | orchestrator | 2025-05-29 01:03:46 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:03:46.592296 | orchestrator | 2025-05-29 01:03:46 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:03:49.622626 | orchestrator | 2025-05-29 01:03:49 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:03:49.622829 | orchestrator | 2025-05-29 01:03:49 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:03:49.623429 | orchestrator | 2025-05-29 01:03:49 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:03:49.623923 | orchestrator | 2025-05-29 01:03:49 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:03:49.625665 | orchestrator | 2025-05-29 01:03:49 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:03:49.625687 | orchestrator | 2025-05-29 01:03:49 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:03:52.652426 | orchestrator | 2025-05-29 01:03:52 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:03:52.652667 | orchestrator | 2025-05-29 01:03:52 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:03:52.652894 | orchestrator | 2025-05-29 01:03:52 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:03:52.653466 | orchestrator | 2025-05-29 01:03:52 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:03:52.654729 | orchestrator | 2025-05-29 01:03:52 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:03:52.654767 | orchestrator | 2025-05-29 01:03:52 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:03:55.698581 | orchestrator | 2025-05-29 01:03:55 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:03:55.698682 | orchestrator | 2025-05-29 01:03:55 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:03:55.699173 | orchestrator | 2025-05-29 01:03:55 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:03:55.699704 | orchestrator | 2025-05-29 01:03:55 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:03:55.701547 | orchestrator | 2025-05-29 01:03:55 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:03:55.701574 | orchestrator | 2025-05-29 01:03:55 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:03:58.732850 | orchestrator | 2025-05-29 01:03:58 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:03:58.733113 | orchestrator | 2025-05-29 01:03:58 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:03:58.733149 | orchestrator | 2025-05-29 01:03:58 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:03:58.733504 | orchestrator | 2025-05-29 01:03:58 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:03:58.735873 | orchestrator | 2025-05-29 01:03:58 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:03:58.735944 | orchestrator | 2025-05-29 01:03:58 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:04:01.758809 | orchestrator | 2025-05-29 01:04:01 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:04:01.759106 | orchestrator | 2025-05-29 01:04:01 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:04:01.762705 | orchestrator | 2025-05-29 01:04:01 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:04:01.763082 | orchestrator | 2025-05-29 01:04:01 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:04:01.764020 | orchestrator | 2025-05-29 01:04:01 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:04:01.764107 | orchestrator | 2025-05-29 01:04:01 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:04:04.786698 | orchestrator | 2025-05-29 01:04:04 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:04:04.786804 | orchestrator | 2025-05-29 01:04:04 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:04:04.787144 | orchestrator | 2025-05-29 01:04:04 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:04:04.788549 | orchestrator | 2025-05-29 01:04:04 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:04:04.788597 | orchestrator | 2025-05-29 01:04:04 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:04:04.788605 | orchestrator | 2025-05-29 01:04:04 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:04:07.817859 | orchestrator | 2025-05-29 01:04:07 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:04:07.817946 | orchestrator | 2025-05-29 01:04:07 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:04:07.818009 | orchestrator | 2025-05-29 01:04:07 | INFO  | Task 940e2d1a-6b53-44b5-8f26-314eae722d11 is in state STARTED
2025-05-29 01:04:07.818076 | orchestrator | 2025-05-29 01:04:07 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:04:07.818088 | orchestrator | 2025-05-29 01:04:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:04:07.818099 | orchestrator | 2025-05-29 01:04:07 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:04:07.818110 | orchestrator | 2025-05-29 01:04:07 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:04:10.846648 | orchestrator | 2025-05-29 01:04:10 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:04:10.847650 | orchestrator | 2025-05-29 01:04:10 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:04:10.848084 | orchestrator | 2025-05-29 01:04:10 | INFO  | Task 940e2d1a-6b53-44b5-8f26-314eae722d11 is in state STARTED
2025-05-29 01:04:10.848503 | orchestrator | 2025-05-29 01:04:10 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:04:10.849066 | orchestrator | 2025-05-29 01:04:10 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:04:10.850336 | orchestrator | 2025-05-29 01:04:10 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:04:10.850415 | orchestrator | 2025-05-29 01:04:10 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:04:13.880490 | orchestrator | 2025-05-29 01:04:13 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:04:13.880632 | orchestrator | 2025-05-29 01:04:13 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:04:13.882366 | orchestrator | 2025-05-29 01:04:13 | INFO  | Task 940e2d1a-6b53-44b5-8f26-314eae722d11 is in state STARTED
2025-05-29 01:04:13.882395 | orchestrator | 2025-05-29 01:04:13 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:04:13.882407 | orchestrator | 2025-05-29 01:04:13 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:04:13.882418 | orchestrator | 2025-05-29 01:04:13 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:04:13.882437 | orchestrator | 2025-05-29 01:04:13 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:04:16.915630 | orchestrator | 2025-05-29 01:04:16 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:04:16.915743 | orchestrator | 2025-05-29 01:04:16 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:04:16.915757 | orchestrator | 2025-05-29 01:04:16 | INFO  | Task 940e2d1a-6b53-44b5-8f26-314eae722d11 is in state SUCCESS
2025-05-29 01:04:16.916022 | orchestrator | 2025-05-29 01:04:16 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:04:16.916645 | orchestrator | 2025-05-29 01:04:16 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:04:16.917962 | orchestrator | 2025-05-29 01:04:16 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:04:16.917983 | orchestrator | 2025-05-29 01:04:16 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:04:19.958263 | orchestrator | 2025-05-29 01:04:19 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED
2025-05-29 01:04:19.963278 | orchestrator | 2025-05-29 01:04:19 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:04:19.963612 | orchestrator | 2025-05-29 01:04:19 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:04:19.963995 | orchestrator | 2025-05-29 01:04:19 | INFO  | Task
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:04:19.965073 | orchestrator | 2025-05-29 01:04:19 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:04:19.965119 | orchestrator | 2025-05-29 01:04:19 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:04:22.996913 | orchestrator | 2025-05-29 01:04:22 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:04:23.004098 | orchestrator | 2025-05-29 01:04:23 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED 2025-05-29 01:04:23.004935 | orchestrator | 2025-05-29 01:04:23 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:04:23.006979 | orchestrator | 2025-05-29 01:04:23 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:04:23.007417 | orchestrator | 2025-05-29 01:04:23 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:04:23.007529 | orchestrator | 2025-05-29 01:04:23 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:04:26.042695 | orchestrator | 2025-05-29 01:04:26 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:04:26.044169 | orchestrator | 2025-05-29 01:04:26 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED 2025-05-29 01:04:26.045019 | orchestrator | 2025-05-29 01:04:26 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:04:26.045089 | orchestrator | 2025-05-29 01:04:26 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:04:26.046066 | orchestrator | 2025-05-29 01:04:26 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:04:26.046093 | orchestrator | 2025-05-29 01:04:26 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:04:29.074868 | orchestrator | 2025-05-29 01:04:29 | INFO  | Task 
e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:04:29.075122 | orchestrator | 2025-05-29 01:04:29 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED 2025-05-29 01:04:29.075544 | orchestrator | 2025-05-29 01:04:29 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:04:29.076039 | orchestrator | 2025-05-29 01:04:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:04:29.076595 | orchestrator | 2025-05-29 01:04:29 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:04:29.076620 | orchestrator | 2025-05-29 01:04:29 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:04:32.107477 | orchestrator | 2025-05-29 01:04:32 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:04:32.107727 | orchestrator | 2025-05-29 01:04:32 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED 2025-05-29 01:04:32.108235 | orchestrator | 2025-05-29 01:04:32 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:04:32.108761 | orchestrator | 2025-05-29 01:04:32 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:04:32.109411 | orchestrator | 2025-05-29 01:04:32 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:04:32.109439 | orchestrator | 2025-05-29 01:04:32 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:04:35.148096 | orchestrator | 2025-05-29 01:04:35 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state STARTED 2025-05-29 01:04:35.148189 | orchestrator | 2025-05-29 01:04:35 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED 2025-05-29 01:04:35.149471 | orchestrator | 2025-05-29 01:04:35 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:04:35.149836 | orchestrator | 2025-05-29 01:04:35 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:04:35.151771 | orchestrator | 2025-05-29 01:04:35 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:04:35.151853 | orchestrator | 2025-05-29 01:04:35 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:04:38.193159 | orchestrator | 2025-05-29 01:04:38 | INFO  | Task e577f5df-e622-48fb-812d-b74ae9042bd6 is in state SUCCESS 2025-05-29 01:04:38.194366 | orchestrator | 2025-05-29 01:04:38.194416 | orchestrator | 2025-05-29 01:04:38.194429 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-29 01:04:38.194441 | orchestrator | 2025-05-29 01:04:38.194452 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-29 01:04:38.194464 | orchestrator | Thursday 29 May 2025 01:03:29 +0000 (0:00:00.200) 0:00:00.200 ********** 2025-05-29 01:04:38.194475 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:04:38.194487 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:04:38.194498 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:04:38.194508 | orchestrator | 2025-05-29 01:04:38.194519 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-29 01:04:38.194530 | orchestrator | Thursday 29 May 2025 01:03:30 +0000 (0:00:00.401) 0:00:00.602 ********** 2025-05-29 01:04:38.194568 | orchestrator | ok: [testbed-node-0] => (item=enable_keystone_True) 2025-05-29 01:04:38.194580 | orchestrator | ok: [testbed-node-1] => (item=enable_keystone_True) 2025-05-29 01:04:38.194591 | orchestrator | ok: [testbed-node-2] => (item=enable_keystone_True) 2025-05-29 01:04:38.194602 | orchestrator | 2025-05-29 01:04:38.194612 | orchestrator | PLAY [Wait for the Keystone service] ******************************************* 2025-05-29 01:04:38.194623 | orchestrator | 2025-05-29 01:04:38.195035 | orchestrator | TASK [Waiting for Keystone public 
port to be UP] ******************************* 2025-05-29 01:04:38.195049 | orchestrator | Thursday 29 May 2025 01:03:31 +0000 (0:00:00.841) 0:00:01.443 ********** 2025-05-29 01:04:38.195060 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:04:38.195071 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:04:38.195082 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:04:38.195093 | orchestrator | 2025-05-29 01:04:38.195104 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 01:04:38.195115 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 01:04:38.195127 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 01:04:38.195138 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0 2025-05-29 01:04:38.195149 | orchestrator | 2025-05-29 01:04:38.195160 | orchestrator | 2025-05-29 01:04:38.195229 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 01:04:38.195242 | orchestrator | Thursday 29 May 2025 01:03:32 +0000 (0:00:01.085) 0:00:02.529 ********** 2025-05-29 01:04:38.195253 | orchestrator | =============================================================================== 2025-05-29 01:04:38.195264 | orchestrator | Waiting for Keystone public port to be UP ------------------------------- 1.09s 2025-05-29 01:04:38.195274 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.84s 2025-05-29 01:04:38.195285 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.40s 2025-05-29 01:04:38.195295 | orchestrator | 2025-05-29 01:04:38.195306 | orchestrator | None 2025-05-29 01:04:38.195317 | orchestrator | 2025-05-29 01:04:38.195328 | orchestrator | PLAY [Group hosts based on configuration] 
************************************** 2025-05-29 01:04:38.195338 | orchestrator | 2025-05-29 01:04:38.195349 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-29 01:04:38.195360 | orchestrator | Thursday 29 May 2025 01:01:15 +0000 (0:00:00.371) 0:00:00.371 ********** 2025-05-29 01:04:38.195370 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:04:38.195381 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:04:38.195392 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:04:38.195402 | orchestrator | 2025-05-29 01:04:38.195413 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-29 01:04:38.195424 | orchestrator | Thursday 29 May 2025 01:01:16 +0000 (0:00:00.753) 0:00:01.125 ********** 2025-05-29 01:04:38.195435 | orchestrator | ok: [testbed-node-0] => (item=enable_designate_True) 2025-05-29 01:04:38.195573 | orchestrator | ok: [testbed-node-1] => (item=enable_designate_True) 2025-05-29 01:04:38.195585 | orchestrator | ok: [testbed-node-2] => (item=enable_designate_True) 2025-05-29 01:04:38.195596 | orchestrator | 2025-05-29 01:04:38.195607 | orchestrator | PLAY [Apply role designate] **************************************************** 2025-05-29 01:04:38.195617 | orchestrator | 2025-05-29 01:04:38.195628 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-29 01:04:38.195641 | orchestrator | Thursday 29 May 2025 01:01:16 +0000 (0:00:00.484) 0:00:01.609 ********** 2025-05-29 01:04:38.195654 | orchestrator | included: /ansible/roles/designate/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:04:38.195667 | orchestrator | 2025-05-29 01:04:38.195678 | orchestrator | TASK [service-ks-register : designate | Creating services] ********************* 2025-05-29 01:04:38.195691 | orchestrator | Thursday 29 May 2025 01:01:17 +0000 (0:00:00.738) 0:00:02.348 ********** 2025-05-29 
01:04:38.195715 | orchestrator | changed: [testbed-node-0] => (item=designate (dns)) 2025-05-29 01:04:38.195728 | orchestrator | 2025-05-29 01:04:38.195740 | orchestrator | TASK [service-ks-register : designate | Creating endpoints] ******************** 2025-05-29 01:04:38.195753 | orchestrator | Thursday 29 May 2025 01:01:21 +0000 (0:00:03.821) 0:00:06.170 ********** 2025-05-29 01:04:38.195766 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api-int.testbed.osism.xyz:9001 -> internal) 2025-05-29 01:04:38.195779 | orchestrator | changed: [testbed-node-0] => (item=designate -> https://api.testbed.osism.xyz:9001 -> public) 2025-05-29 01:04:38.195792 | orchestrator | 2025-05-29 01:04:38.195804 | orchestrator | TASK [service-ks-register : designate | Creating projects] ********************* 2025-05-29 01:04:38.195816 | orchestrator | Thursday 29 May 2025 01:01:27 +0000 (0:00:06.577) 0:00:12.747 ********** 2025-05-29 01:04:38.195828 | orchestrator | changed: [testbed-node-0] => (item=service) 2025-05-29 01:04:38.195841 | orchestrator | 2025-05-29 01:04:38.195853 | orchestrator | TASK [service-ks-register : designate | Creating users] ************************ 2025-05-29 01:04:38.195865 | orchestrator | Thursday 29 May 2025 01:01:31 +0000 (0:00:03.442) 0:00:16.189 ********** 2025-05-29 01:04:38.195890 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-29 01:04:38.195904 | orchestrator | changed: [testbed-node-0] => (item=designate -> service) 2025-05-29 01:04:38.195916 | orchestrator | 2025-05-29 01:04:38.195948 | orchestrator | TASK [service-ks-register : designate | Creating roles] ************************ 2025-05-29 01:04:38.195962 | orchestrator | Thursday 29 May 2025 01:01:34 +0000 (0:00:03.820) 0:00:20.010 ********** 2025-05-29 01:04:38.196006 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-29 01:04:38.196021 | orchestrator | 2025-05-29 01:04:38.196034 | orchestrator | TASK [service-ks-register : 
designate | Granting user roles] ******************* 2025-05-29 01:04:38.196046 | orchestrator | Thursday 29 May 2025 01:01:38 +0000 (0:00:03.188) 0:00:23.199 ********** 2025-05-29 01:04:38.196059 | orchestrator | changed: [testbed-node-0] => (item=designate -> service -> admin) 2025-05-29 01:04:38.196070 | orchestrator | 2025-05-29 01:04:38.196081 | orchestrator | TASK [designate : Ensuring config directories exist] *************************** 2025-05-29 01:04:38.196091 | orchestrator | Thursday 29 May 2025 01:01:42 +0000 (0:00:04.185) 0:00:27.385 ********** 2025-05-29 01:04:38.196121 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.196147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.196177 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.196196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': 
'30'}}}) 2025-05-29 01:04:38.196233 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196253 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196283 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
designate-central 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196305 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196336 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196355 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196381 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196394 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196405 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196588 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 
'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.196624 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196636 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.196648 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.196664 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.196684 | orchestrator | 2025-05-29 01:04:38.196696 | orchestrator | TASK [designate : Check if policies shall be overwritten] ********************** 2025-05-29 01:04:38.196707 | orchestrator | Thursday 29 May 2025 01:01:45 +0000 (0:00:03.342) 0:00:30.727 ********** 2025-05-29 01:04:38.196718 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:04:38.196729 | orchestrator | 2025-05-29 01:04:38.196740 | orchestrator | TASK [designate : Set designate policy file] *********************************** 2025-05-29 01:04:38.196750 | orchestrator | Thursday 29 May 2025 01:01:45 +0000 (0:00:00.118) 0:00:30.846 ********** 2025-05-29 01:04:38.196761 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:04:38.196772 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:04:38.196783 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:04:38.196793 | orchestrator | 2025-05-29 01:04:38.196804 | orchestrator | TASK [designate : include_tasks] *********************************************** 
2025-05-29 01:04:38.196814 | orchestrator | Thursday 29 May 2025 01:01:46 +0000 (0:00:00.410) 0:00:31.256 ********** 2025-05-29 01:04:38.196825 | orchestrator | included: /ansible/roles/designate/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:04:38.196836 | orchestrator | 2025-05-29 01:04:38.196846 | orchestrator | TASK [service-cert-copy : designate | Copying over extra CA certificates] ****** 2025-05-29 01:04:38.196857 | orchestrator | Thursday 29 May 2025 01:01:46 +0000 (0:00:00.611) 0:00:31.868 ********** 2025-05-29 01:04:38.196869 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.196888 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.196901 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.196924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 
01:04:38.197019 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197031 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197050 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 
5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197071 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197099 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197110 | orchestrator | changed: 
[testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197120 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197146 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 
'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197156 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197166 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197190 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197200 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.197210 | orchestrator | 2025-05-29 01:04:38.197220 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS certificate] *** 2025-05-29 01:04:38.197230 | orchestrator | Thursday 29 May 2025 01:01:52 +0000 (0:00:06.217) 0:00:38.085 ********** 2025-05-29 01:04:38.197239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': 
{'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-29 01:04:38.197257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-29 01:04:38.197268 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197284 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197316 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197352 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:04:38.197369 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-29 01:04:38.197393 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-29 01:04:38.197410 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197436 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197473 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197490 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:04:38.197555 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 
'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-29 01:04:38.197576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-29 01:04:38.197586 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197604 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197619 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197629 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197640 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:04:38.197649 | orchestrator | 2025-05-29 01:04:38.197659 | orchestrator | TASK [service-cert-copy : designate | Copying over backend internal TLS key] *** 2025-05-29 01:04:38.197669 | orchestrator | Thursday 29 May 2025 01:01:55 +0000 (0:00:02.165) 0:00:40.250 ********** 2025-05-29 01:04:38.197679 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-29 01:04:38.197694 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-29 01:04:38.197710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197720 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197735 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197745 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197755 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:04:38.197765 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-29 01:04:38.197775 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-29 01:04:38.197797 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197808 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197822 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197832 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197842 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:04:38.197852 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-29 01:04:38.197863 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': 
{'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-29 01:04:38.197883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197894 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197909 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 
'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197920 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.197983 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:04:38.197994 | orchestrator | 2025-05-29 01:04:38.198004 | orchestrator | TASK [designate : Copying over config.json files for services] ***************** 2025-05-29 01:04:38.198066 | orchestrator | Thursday 29 May 2025 01:01:56 +0000 (0:00:01.798) 0:00:42.049 ********** 2025-05-29 01:04:38.198081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.198110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.198121 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 
'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.198137 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 01:04:38.198147 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 01:04:38.198157 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 01:04:38.198167 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.198189 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.198200 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.198210 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.198225 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.198235 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.198245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.198261 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.198277 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.198288 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.198298 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.198976 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199091 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.199125 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  
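Aside (not part of the job output): each loop item above is a plain service-definition dict that kolla-ansible iterates over. The sketch below reconstructs one such entry from the log and shows how its `healthcheck` fields could map onto Docker's health-check options. The field names and values mirror the log verbatim; the `healthcheck_to_docker_args` helper is hypothetical and only illustrates the shape of the data (kolla-ansible applies these settings through its own container module, not this function).

```python
# One service definition as it appears in the log above (values copied verbatim).
designate_worker = {
    "container_name": "designate_worker",
    "group": "designate-worker",
    "enabled": True,
    "image": "registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206",
    "volumes": [
        "/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro",
        "/etc/localtime:/etc/localtime:ro",
        "/etc/timezone:/etc/timezone:ro",
        "kolla_logs:/var/log/kolla/",
    ],
    "dimensions": {},
    "healthcheck": {
        "interval": "30",
        "retries": "3",
        "start_period": "5",
        "test": ["CMD-SHELL", "healthcheck_port designate-worker 5672"],
        "timeout": "30",
    },
}


def healthcheck_to_docker_args(hc):
    """Illustrative only: render the kolla-style healthcheck dict as
    `docker run`-style flags to show what each field controls."""
    return [
        f"--health-interval={hc['interval']}s",
        f"--health-retries={hc['retries']}",
        f"--health-start-period={hc['start_period']}s",
        f"--health-timeout={hc['timeout']}s",
        f"--health-cmd={hc['test'][1]}",
    ]


args = healthcheck_to_docker_args(designate_worker["healthcheck"])
```

This also makes the skipped `designate-sink` items legible: that entry carries `'enabled': False`, so the task's loop condition filters it out on every node.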
2025-05-29 01:04:38.199139 | orchestrator | 2025-05-29 01:04:38.199152 | orchestrator | TASK [designate : Copying over designate.conf] ********************************* 2025-05-29 01:04:38.199165 | orchestrator | Thursday 29 May 2025 01:02:03 +0000 (0:00:06.365) 0:00:48.414 ********** 2025-05-29 01:04:38.199177 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.199197 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.199227 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.199240 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199273 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199301 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199320 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199332 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199351 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 
'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199363 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199374 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199386 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199404 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199424 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.199443 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199457 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199497 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199511 | orchestrator |
2025-05-29 01:04:38.199524 | orchestrator | TASK [designate : Copying over pools.yaml] *************************************
2025-05-29 01:04:38.199538 | orchestrator | Thursday 29 May 2025 01:02:29 +0000 (0:00:26.124) 0:01:14.539 **********
2025-05-29 01:04:38.199551 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-29 01:04:38.199569 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-29 01:04:38.199582 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/pools.yaml.j2)
2025-05-29 01:04:38.199595 | orchestrator |
2025-05-29 01:04:38.199608 | orchestrator | TASK [designate : Copying over named.conf] *************************************
2025-05-29 01:04:38.199621 | orchestrator | Thursday 29 May 2025 01:02:36 +0000 (0:00:07.447) 0:01:21.986 **********
2025-05-29 01:04:38.199640 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-29 01:04:38.199660 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-29 01:04:38.199673 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/designate/templates/named.conf.j2)
2025-05-29 01:04:38.199686 | orchestrator |
2025-05-29 01:04:38.199699 | orchestrator | TASK [designate : Copying over rndc.conf] **************************************
2025-05-29 01:04:38.199712 | orchestrator | Thursday 29 May 2025 01:02:42 +0000 (0:00:05.553) 0:01:27.539 **********
2025-05-29 01:04:38.199725 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-29 01:04:38.199740 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-29 01:04:38.199754 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-29 01:04:38.199768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-29 01:04:38.199801 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-29 01:04:38.199814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199826 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199837 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199849 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199904 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-29 01:04:38.199917 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199951 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199972 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.199993 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200013 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200048 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200068 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200080 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200103 | orchestrator |
2025-05-29 01:04:38.200115 | orchestrator | TASK [designate : Copying over rndc.key] ***************************************
2025-05-29 01:04:38.200126 | orchestrator | Thursday 29 May 2025 01:02:47 +0000 (0:00:04.761) 0:01:32.301 **********
2025-05-29 01:04:38.200137 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-29 01:04:38.200149 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-29 01:04:38.200179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-29 01:04:38.200191 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-29 01:04:38.200203 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200215 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200226 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200245 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-29 01:04:38.200267 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200291 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200302 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-29 01:04:38.200314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200325 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200367 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200379 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200403 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200414 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200432 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200444 | orchestrator |
2025-05-29 01:04:38.200455 | orchestrator | TASK [designate : include_tasks] ***********************************************
2025-05-29 01:04:38.200466 | orchestrator | Thursday 29 May 2025 01:02:50 +0000 (0:00:00.753) 0:01:35.271 **********
2025-05-29 01:04:38.200477 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:04:38.200489 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:04:38.200500 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:04:38.200511 | orchestrator |
2025-05-29 01:04:38.200522 | orchestrator | TASK [designate : Copying over existing policy file] ***************************
2025-05-29 01:04:38.200537 | orchestrator | Thursday 29 May 2025 01:02:50 +0000 (0:00:00.753) 0:01:36.025 **********
2025-05-29 01:04:38.200555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-29 01:04:38.200567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-29 01:04:38.200579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200591 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200609 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200654 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:04:38.200667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})
2025-05-29 01:04:38.200678 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})
2025-05-29 01:04:38.200690 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})
2025-05-29 01:04:38.200707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL',
'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.200719 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.200735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.200753 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 
01:04:38.200765 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:04:38.200777 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}})  2025-05-29 01:04:38.200789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}})  2025-05-29 01:04:38.200806 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': 
['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.200818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.200837 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.200855 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.200867 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.200879 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:04:38.200890 | orchestrator | 2025-05-29 01:04:38.200901 | orchestrator | TASK [designate : Check designate containers] ********************************** 2025-05-29 01:04:38.200913 | orchestrator | Thursday 29 May 2025 01:02:51 +0000 (0:00:00.947) 0:01:36.973 ********** 2025-05-29 01:04:38.200924 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': 
'9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.200968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 2025-05-29 01:04:38.200985 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-api', 'value': {'container_name': 'designate_api', 'group': 'designate-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-api:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9001'], 'timeout': '30'}, 'haproxy': {'designate_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9001', 'listen_port': '9001'}, 'designate_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9001', 'listen_port': '9001'}}}}) 
2025-05-29 01:04:38.201005 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201025 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201044 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-backend-bind9', 'value': {'container_name': 'designate_backend_bind9', 'group': 'designate-backend-bind9', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-backend-bind9:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-backend-bind9/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'designate_backend_bind9:/var/lib/named/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen named 53'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201073 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201092 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201110 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-central', 'value': {'container_name': 'designate_central', 'group': 'designate-central', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-central:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-central/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-central 5672'], 
'timeout': '30'}}}) 2025-05-29 01:04:38.201418 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201453 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-mdns', 'value': {'container_name': 'designate_mdns', 'group': 'designate-mdns', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-mdns:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-mdns/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-mdns 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201507 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-producer', 
'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201526 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201545 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-producer', 'value': {'container_name': 'designate_producer', 'group': 'designate-producer', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-producer:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-producer/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-producer 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201572 | orchestrator | changed: [testbed-node-0] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 
'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.201624 | orchestrator | changed: [testbed-node-2] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201644 | orchestrator | changed: [testbed-node-1] => (item={'key': 'designate-worker', 'value': {'container_name': 'designate_worker', 'group': 'designate-worker', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/designate-worker:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-worker/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-worker 5672'], 'timeout': '30'}}}) 2025-05-29 01:04:38.201675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.201695 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'designate-sink', 'value': {'container_name': 'designate_sink', 'group': 'designate-sink', 'enabled': False, 'image': 'registry.osism.tech/kolla/release/designate-sink:18.0.1.20241206', 'volumes': ['/etc/kolla/designate-sink/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port designate-sink 5672'], 'timeout': '30'}}})  2025-05-29 01:04:38.201715 | orchestrator | 2025-05-29 01:04:38.201734 | orchestrator | TASK [designate : include_tasks] *********************************************** 2025-05-29 01:04:38.201754 | orchestrator | Thursday 29 May 2025 01:02:57 +0000 (0:00:05.189) 0:01:42.162 ********** 2025-05-29 01:04:38.201773 | 
orchestrator | skipping: [testbed-node-0]
2025-05-29 01:04:38.201792 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:04:38.201810 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:04:38.201830 | orchestrator |
2025-05-29 01:04:38.201848 | orchestrator | TASK [designate : Creating Designate databases] ********************************
2025-05-29 01:04:38.201867 | orchestrator | Thursday 29 May 2025 01:02:57 +0000 (0:00:00.859) 0:01:43.022 **********
2025-05-29 01:04:38.201886 | orchestrator | changed: [testbed-node-0] => (item=designate)
2025-05-29 01:04:38.201905 | orchestrator |
2025-05-29 01:04:38.201924 | orchestrator | TASK [designate : Creating Designate databases user and setting permissions] ***
2025-05-29 01:04:38.201970 | orchestrator | Thursday 29 May 2025 01:03:00 +0000 (0:00:02.466) 0:01:45.489 **********
2025-05-29 01:04:38.201989 | orchestrator | changed: [testbed-node-0] => (item=None)
2025-05-29 01:04:38.202008 | orchestrator | changed: [testbed-node-0 -> {{ groups['designate-central'][0] }}]
2025-05-29 01:04:38.202075 | orchestrator |
2025-05-29 01:04:38.202095 | orchestrator | TASK [designate : Running Designate bootstrap container] ***********************
2025-05-29 01:04:38.202115 | orchestrator | Thursday 29 May 2025 01:03:03 +0000 (0:00:02.717) 0:01:48.206 **********
2025-05-29 01:04:38.202134 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:04:38.202154 | orchestrator |
2025-05-29 01:04:38.202182 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-29 01:04:38.202202 | orchestrator | Thursday 29 May 2025 01:03:18 +0000 (0:00:15.334) 0:02:03.541 **********
2025-05-29 01:04:38.202222 | orchestrator |
2025-05-29 01:04:38.202241 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-29 01:04:38.202260 | orchestrator | Thursday 29 May 2025 01:03:18 +0000 (0:00:00.118) 0:02:03.660 **********
2025-05-29 01:04:38.202280 |
orchestrator |
2025-05-29 01:04:38.202299 | orchestrator | TASK [designate : Flush handlers] **********************************************
2025-05-29 01:04:38.202328 | orchestrator | Thursday 29 May 2025 01:03:18 +0000 (0:00:00.113) 0:02:03.773 **********
2025-05-29 01:04:38.202359 | orchestrator |
2025-05-29 01:04:38.202379 | orchestrator | RUNNING HANDLER [designate : Restart designate-backend-bind9 container] ********
2025-05-29 01:04:38.202399 | orchestrator | Thursday 29 May 2025 01:03:18 +0000 (0:00:00.098) 0:02:03.872 **********
2025-05-29 01:04:38.202418 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:04:38.202437 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:04:38.202456 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:04:38.202475 | orchestrator |
2025-05-29 01:04:38.202495 | orchestrator | RUNNING HANDLER [designate : Restart designate-api container] ******************
2025-05-29 01:04:38.202514 | orchestrator | Thursday 29 May 2025 01:03:33 +0000 (0:00:14.939) 0:02:18.811 **********
2025-05-29 01:04:38.202530 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:04:38.202548 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:04:38.202567 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:04:38.202586 | orchestrator |
2025-05-29 01:04:38.202606 | orchestrator | RUNNING HANDLER [designate : Restart designate-central container] **************
2025-05-29 01:04:38.202625 | orchestrator | Thursday 29 May 2025 01:03:45 +0000 (0:00:11.990) 0:02:30.802 **********
2025-05-29 01:04:38.202644 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:04:38.202663 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:04:38.202682 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:04:38.202701 | orchestrator |
2025-05-29 01:04:38.202720 | orchestrator | RUNNING HANDLER [designate : Restart designate-producer container] *************
2025-05-29 01:04:38.202740 | orchestrator | Thursday 29 May 2025 01:03:58 +0000
(0:00:12.491) 0:02:43.293 **********
2025-05-29 01:04:38.202759 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:04:38.202778 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:04:38.202797 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:04:38.202815 | orchestrator |
2025-05-29 01:04:38.202835 | orchestrator | RUNNING HANDLER [designate : Restart designate-mdns container] *****************
2025-05-29 01:04:38.202853 | orchestrator | Thursday 29 May 2025 01:04:09 +0000 (0:00:11.501) 0:02:54.794 **********
2025-05-29 01:04:38.202873 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:04:38.202892 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:04:38.202911 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:04:38.202965 | orchestrator |
2025-05-29 01:04:38.202987 | orchestrator | RUNNING HANDLER [designate : Restart designate-worker container] ***************
2025-05-29 01:04:38.203006 | orchestrator | Thursday 29 May 2025 01:04:17 +0000 (0:00:07.521) 0:03:02.316 **********
2025-05-29 01:04:38.203026 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:04:38.203045 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:04:38.203064 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:04:38.203082 | orchestrator |
2025-05-29 01:04:38.203101 | orchestrator | TASK [designate : Non-destructive DNS pools update] ****************************
2025-05-29 01:04:38.203121 | orchestrator | Thursday 29 May 2025 01:04:32 +0000 (0:00:15.621) 0:03:17.937 **********
2025-05-29 01:04:38.203141 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:04:38.203159 | orchestrator |
2025-05-29 01:04:38.203178 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:04:38.203223 | orchestrator | testbed-node-0 : ok=29  changed=24  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0
2025-05-29 01:04:38.203244 | orchestrator | testbed-node-1 : ok=19  changed=15  unreachable=0
failed=0 skipped=6  rescued=0 ignored=0
2025-05-29 01:04:38.203262 | orchestrator | testbed-node-2 : ok=19  changed=15  unreachable=0 failed=0 skipped=6  rescued=0 ignored=0
2025-05-29 01:04:38.203281 | orchestrator |
2025-05-29 01:04:38.203298 | orchestrator |
2025-05-29 01:04:38.203316 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 01:04:38.203334 | orchestrator | Thursday 29 May 2025 01:04:37 +0000 (0:00:05.037) 0:03:22.974 **********
2025-05-29 01:04:38.203352 | orchestrator | ===============================================================================
2025-05-29 01:04:38.203386 | orchestrator | designate : Copying over designate.conf -------------------------------- 26.12s
2025-05-29 01:04:38.203404 | orchestrator | designate : Restart designate-worker container ------------------------- 15.62s
2025-05-29 01:04:38.203421 | orchestrator | designate : Running Designate bootstrap container ---------------------- 15.33s
2025-05-29 01:04:38.203439 | orchestrator | designate : Restart designate-backend-bind9 container ------------------ 14.94s
2025-05-29 01:04:38.203456 | orchestrator | designate : Restart designate-central container ------------------------ 12.49s
2025-05-29 01:04:38.203474 | orchestrator | designate : Restart designate-api container ---------------------------- 11.99s
2025-05-29 01:04:38.203491 | orchestrator | designate : Restart designate-producer container ----------------------- 11.50s
2025-05-29 01:04:38.203508 | orchestrator | designate : Restart designate-mdns container ---------------------------- 7.52s
2025-05-29 01:04:38.203527 | orchestrator | designate : Copying over pools.yaml ------------------------------------- 7.45s
2025-05-29 01:04:38.203544 | orchestrator | service-ks-register : designate | Creating endpoints -------------------- 6.58s
2025-05-29 01:04:38.203561 | orchestrator | designate : Copying over config.json files for services ----------------- 6.37s
2025-05-29 01:04:38.203578 | orchestrator | service-cert-copy : designate | Copying over extra CA certificates ------ 6.22s
2025-05-29 01:04:38.203604 | orchestrator | designate : Copying over named.conf ------------------------------------- 5.55s
2025-05-29 01:04:38.203623 | orchestrator | designate : Check designate containers ---------------------------------- 5.19s
2025-05-29 01:04:38.203642 | orchestrator | designate : Non-destructive DNS pools update ---------------------------- 5.04s
2025-05-29 01:04:38.203660 | orchestrator | designate : Copying over rndc.conf -------------------------------------- 4.76s
2025-05-29 01:04:38.203678 | orchestrator | service-ks-register : designate | Granting user roles ------------------- 4.19s
2025-05-29 01:04:38.203711 | orchestrator | service-ks-register : designate | Creating services --------------------- 3.82s
2025-05-29 01:04:38.203729 | orchestrator | service-ks-register : designate | Creating users ------------------------ 3.82s
2025-05-29 01:04:38.203747 | orchestrator | service-ks-register : designate | Creating projects --------------------- 3.44s
2025-05-29 01:04:38.203766 | orchestrator | 2025-05-29 01:04:38 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:04:38.203785 | orchestrator | 2025-05-29 01:04:38 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:04:38.203804 | orchestrator | 2025-05-29 01:04:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:04:38.203823 | orchestrator | 2025-05-29 01:04:38 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:04:38.203840 | orchestrator | 2025-05-29 01:04:38 | INFO  | Wait 1 second(s) until the next check
[... identical state polling repeated every ~3 seconds from 01:04:41 through 01:05:11; task b3328779-58f9-4299-9e56-62c0df03259b additionally reported in state STARTED from 01:04:41 onwards ...]
2025-05-29 01:05:14.805179 | orchestrator | 2025-05-29 01:05:14 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state STARTED
2025-05-29 01:05:14.805269 | orchestrator | 2025-05-29 01:05:14 | INFO  | Task b3328779-58f9-4299-9e56-62c0df03259b is in state SUCCESS
2025-05-29 01:05:14.806198 | orchestrator | 2025-05-29 01:05:14 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED
2025-05-29 01:05:14.807210 | orchestrator | 2025-05-29 01:05:14 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED
2025-05-29 01:05:14.807530 | orchestrator | 2025-05-29 01:05:14 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:05:14.808641 | orchestrator | 2025-05-29 01:05:14 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:05:14.808664 | orchestrator | 2025-05-29 01:05:14 | INFO  | Wait 1 second(s) until the next check
[... identical polling (tasks c5a4286f, b0bf4f90, 714be9b6, 380b6076 and 222c4d4a all in state STARTED) repeated every ~3 seconds through 01:05:26 ...]
2025-05-29 01:05:29.983607 | orchestrator |
2025-05-29 01:05:29.983708 | orchestrator |
2025-05-29 01:05:29.983803 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-29 01:05:29.983819 | orchestrator |
2025-05-29 01:05:29.983829 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-29 01:05:29.983840 | orchestrator | Thursday 29 May 2025 01:04:40 +0000 (0:00:00.243) 0:00:00.243 **********
2025-05-29 01:05:29.983878 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:05:29.983890 | orchestrator | ok: [testbed-node-1]
2025-05-29 01:05:29.983942 | orchestrator | ok: [testbed-node-2]
2025-05-29 01:05:29.983952 | orchestrator | ok: [testbed-manager]
2025-05-29 01:05:29.983962 | orchestrator | ok: [testbed-node-3]
2025-05-29 01:05:29.983972 | orchestrator | ok: [testbed-node-4]
2025-05-29 01:05:29.983981 | orchestrator | ok: [testbed-node-5]
2025-05-29 01:05:29.983991 | orchestrator |
2025-05-29 01:05:29.984001 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-29 01:05:29.984011 | orchestrator | Thursday 29 May 2025 01:04:41 +0000 (0:00:00.725) 0:00:00.968 **********
2025-05-29 01:05:29.984022 | orchestrator |
ok: [testbed-node-0] => (item=enable_ceph_rgw_True)
2025-05-29 01:05:29.984032 | orchestrator | ok: [testbed-node-1] => (item=enable_ceph_rgw_True)
2025-05-29 01:05:29.984042 | orchestrator | ok: [testbed-node-2] => (item=enable_ceph_rgw_True)
2025-05-29 01:05:29.984052 | orchestrator | ok: [testbed-manager] => (item=enable_ceph_rgw_True)
2025-05-29 01:05:29.984062 | orchestrator | ok: [testbed-node-3] => (item=enable_ceph_rgw_True)
2025-05-29 01:05:29.984071 | orchestrator | ok: [testbed-node-4] => (item=enable_ceph_rgw_True)
2025-05-29 01:05:29.984081 | orchestrator | ok: [testbed-node-5] => (item=enable_ceph_rgw_True)
2025-05-29 01:05:29.984091 | orchestrator |
2025-05-29 01:05:29.984101 | orchestrator | PLAY [Apply role ceph-rgw] *****************************************************
2025-05-29 01:05:29.984110 | orchestrator |
2025-05-29 01:05:29.984120 | orchestrator | TASK [ceph-rgw : include_tasks] ************************************************
2025-05-29 01:05:29.984131 | orchestrator | Thursday 29 May 2025 01:04:41 +0000 (0:00:00.795) 0:00:01.764 **********
2025-05-29 01:05:29.984142 | orchestrator | included: /ansible/roles/ceph-rgw/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-manager, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 01:05:29.984153 | orchestrator |
2025-05-29 01:05:29.984163 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating services] **********************
2025-05-29 01:05:29.984196 | orchestrator | Thursday 29 May 2025 01:04:43 +0000 (0:00:01.444) 0:00:03.209 **********
2025-05-29 01:05:29.984207 | orchestrator | changed: [testbed-node-0] => (item=swift (object-store))
2025-05-29 01:05:29.984217 | orchestrator |
2025-05-29 01:05:29.984227 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating endpoints] *********************
2025-05-29 01:05:29.984237 | orchestrator | Thursday 29 May 2025 01:04:46 +0000 (0:00:03.357) 0:00:06.566 **********
2025-05-29 01:05:29.984247 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api-int.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> internal)
2025-05-29 01:05:29.984259 | orchestrator | changed: [testbed-node-0] => (item=swift -> https://api.testbed.osism.xyz:6780/swift/v1/AUTH_%(project_id)s -> public)
2025-05-29 01:05:29.984269 | orchestrator |
2025-05-29 01:05:29.984279 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating projects] **********************
2025-05-29 01:05:29.984289 | orchestrator | Thursday 29 May 2025 01:04:53 +0000 (0:00:06.660) 0:00:13.227 **********
2025-05-29 01:05:29.984321 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-29 01:05:29.984331 | orchestrator |
2025-05-29 01:05:29.984341 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating users] *************************
2025-05-29 01:05:29.984351 | orchestrator | Thursday 29 May 2025 01:04:56 +0000 (0:00:03.413) 0:00:16.640 **********
2025-05-29 01:05:29.984361 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-29 01:05:29.984371 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service)
2025-05-29 01:05:29.984380 | orchestrator |
2025-05-29 01:05:29.984390 | orchestrator | TASK [service-ks-register : ceph-rgw | Creating roles] *************************
2025-05-29 01:05:29.984400 | orchestrator | Thursday 29 May 2025 01:05:00 +0000 (0:00:03.944) 0:00:20.585 **********
2025-05-29 01:05:29.984409 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-29 01:05:29.984419 | orchestrator | changed: [testbed-node-0] => (item=ResellerAdmin)
2025-05-29 01:05:29.984429 | orchestrator |
2025-05-29 01:05:29.984439 | orchestrator | TASK [service-ks-register : ceph-rgw | Granting user roles] ********************
2025-05-29 01:05:29.984448 | orchestrator | Thursday 29 May 2025 01:05:06 +0000 (0:00:06.158) 0:00:26.743 **********
2025-05-29 01:05:29.984458 | orchestrator | changed: [testbed-node-0] => (item=ceph_rgw -> service -> admin)
2025-05-29 01:05:29.984467 | orchestrator |
2025-05-29 01:05:29.984477 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:05:29.984487 | orchestrator | testbed-manager : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:05:29.984497 | orchestrator | testbed-node-0 : ok=9  changed=5  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:05:29.984508 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:05:29.984517 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:05:29.984541 | orchestrator | testbed-node-3 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:05:29.984567 | orchestrator | testbed-node-4 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:05:29.984578 | orchestrator | testbed-node-5 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:05:29.984588 | orchestrator |
2025-05-29 01:05:29.984597 | orchestrator |
2025-05-29 01:05:29.984607 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 01:05:29.984617 | orchestrator | Thursday 29 May 2025 01:05:11 +0000 (0:00:04.772) 0:00:31.515 **********
2025-05-29 01:05:29.984627 | orchestrator | ===============================================================================
2025-05-29 01:05:29.984636 | orchestrator | service-ks-register : ceph-rgw | Creating endpoints --------------------- 6.66s
2025-05-29 01:05:29.984646 | orchestrator | service-ks-register : ceph-rgw | Creating roles ------------------------- 6.16s
2025-05-29 01:05:29.984655 | orchestrator | service-ks-register : ceph-rgw | Granting user roles -------------------- 4.77s
2025-05-29 01:05:29.984665 | orchestrator | service-ks-register : ceph-rgw | Creating users ------------------------- 3.94s
2025-05-29 01:05:29.984674 | orchestrator | service-ks-register : ceph-rgw | Creating projects ---------------------- 3.41s
2025-05-29 01:05:29.984684 | orchestrator | service-ks-register : ceph-rgw | Creating services ---------------------- 3.36s
2025-05-29 01:05:29.984693 | orchestrator | ceph-rgw : include_tasks ------------------------------------------------ 1.44s
2025-05-29 01:05:29.984703 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.80s
2025-05-29 01:05:29.984713 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.73s
2025-05-29 01:05:29.984722 | orchestrator |
2025-05-29 01:05:29.984732 | orchestrator |
2025-05-29 01:05:29.984747 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-29 01:05:29.984757 | orchestrator |
2025-05-29 01:05:29.984767 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-29 01:05:29.984776 | orchestrator | Thursday 29 May 2025 01:03:24 +0000 (0:00:00.295) 0:00:00.295 **********
2025-05-29 01:05:29.984786 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:05:29.984796 | orchestrator | ok: [testbed-node-1]
2025-05-29 01:05:29.984805 | orchestrator | ok: [testbed-node-2]
2025-05-29 01:05:29.984815 | orchestrator |
2025-05-29 01:05:29.984824 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-29 01:05:29.984834 | orchestrator | Thursday 29 May 2025 01:03:25 +0000 (0:00:00.651) 0:00:00.947 **********
2025-05-29 01:05:29.984843 | orchestrator | ok: [testbed-node-0] => (item=enable_magnum_True)
2025-05-29 01:05:29.984853 | orchestrator | ok: [testbed-node-1] => (item=enable_magnum_True)
2025-05-29 01:05:29.984862 | orchestrator | ok: [testbed-node-2] => (item=enable_magnum_True)
2025-05-29 01:05:29.984872 | orchestrator |
2025-05-29 01:05:29.984882 | orchestrator | PLAY [Apply role magnum] *******************************************************
2025-05-29 01:05:29.984906 | orchestrator |
2025-05-29 01:05:29.984917 | orchestrator | TASK [magnum : include_tasks] **************************************************
2025-05-29 01:05:29.984926 | orchestrator | Thursday 29 May 2025 01:03:25 +0000 (0:00:00.354) 0:00:01.301 **********
2025-05-29 01:05:29.984936 | orchestrator | included: /ansible/roles/magnum/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 01:05:29.984945 | orchestrator |
2025-05-29 01:05:29.984955 | orchestrator | TASK [service-ks-register : magnum | Creating services] ************************
2025-05-29 01:05:29.984964 | orchestrator | Thursday 29 May 2025 01:03:27 +0000 (0:00:01.336) 0:00:02.637 **********
2025-05-29 01:05:29.984974 | orchestrator | changed: [testbed-node-0] => (item=magnum (container-infra))
2025-05-29 01:05:29.984983 | orchestrator |
2025-05-29 01:05:29.984993 | orchestrator | TASK [service-ks-register : magnum | Creating endpoints] ***********************
2025-05-29 01:05:29.985002 | orchestrator | Thursday 29 May 2025 01:03:31 +0000 (0:00:03.774) 0:00:06.412 **********
2025-05-29 01:05:29.985012 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api-int.testbed.osism.xyz:9511/v1 -> internal)
2025-05-29 01:05:29.985022 | orchestrator | changed: [testbed-node-0] => (item=magnum -> https://api.testbed.osism.xyz:9511/v1 -> public)
2025-05-29 01:05:29.985031 | orchestrator |
2025-05-29 01:05:29.985041 | orchestrator | TASK [service-ks-register : magnum | Creating projects] ************************
2025-05-29 01:05:29.985050 | orchestrator | Thursday 29 May 2025 01:03:37 +0000 (0:00:06.359) 0:00:12.771 **********
2025-05-29 01:05:29.985060 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-29 01:05:29.985069 | orchestrator |
2025-05-29 01:05:29.985079 | orchestrator | TASK [service-ks-register : magnum | Creating users] ***************************
2025-05-29 01:05:29.985088 | orchestrator | Thursday 29 May 2025 01:03:40 +0000 (0:00:03.473) 0:00:16.245 **********
2025-05-29 01:05:29.985098 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-29 01:05:29.985107 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service)
2025-05-29 01:05:29.985117 | orchestrator |
2025-05-29 01:05:29.985127 | orchestrator | TASK [service-ks-register : magnum | Creating roles] ***************************
2025-05-29 01:05:29.985136 | orchestrator | Thursday 29 May 2025 01:03:44 +0000 (0:00:03.837) 0:00:20.082 **********
2025-05-29 01:05:29.985146 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-29 01:05:29.985156 | orchestrator |
2025-05-29 01:05:29.985165 | orchestrator | TASK [service-ks-register : magnum | Granting user roles] **********************
2025-05-29 01:05:29.985174 | orchestrator | Thursday 29 May 2025 01:03:47 +0000 (0:00:03.225) 0:00:23.308 **********
2025-05-29 01:05:29.985184 | orchestrator | changed: [testbed-node-0] => (item=magnum -> service -> admin)
2025-05-29 01:05:29.985193 | orchestrator |
2025-05-29 01:05:29.985203 | orchestrator | TASK [magnum : Creating Magnum trustee domain] *********************************
2025-05-29 01:05:29.985212 | orchestrator | Thursday 29 May 2025 01:03:52 +0000 (0:00:04.360) 0:00:27.669 **********
2025-05-29 01:05:29.985233 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:05:29.985243 | orchestrator |
2025-05-29 01:05:29.985252 | orchestrator | TASK [magnum : Creating Magnum trustee user] ***********************************
2025-05-29 01:05:29.985268 | orchestrator | Thursday 29 May 2025 01:03:55 +0000 (0:00:03.296) 0:00:30.965 **********
2025-05-29 01:05:29.985278 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:05:29.985287 | orchestrator |
2025-05-29 01:05:29.985297 | orchestrator | TASK [magnum : Creating Magnum trustee user role]
****************************** 2025-05-29 01:05:29.985306 | orchestrator | Thursday 29 May 2025 01:03:59 +0000 (0:00:04.207) 0:00:35.173 ********** 2025-05-29 01:05:29.985316 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:05:29.985325 | orchestrator | 2025-05-29 01:05:29.985334 | orchestrator | TASK [magnum : Ensuring config directories exist] ****************************** 2025-05-29 01:05:29.985344 | orchestrator | Thursday 29 May 2025 01:04:03 +0000 (0:00:03.693) 0:00:38.867 ********** 2025-05-29 01:05:29.985356 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.985371 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.985382 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.985393 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.985422 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.985433 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.985443 | orchestrator | 2025-05-29 01:05:29.985453 | orchestrator | TASK [magnum : Check if policies shall be overwritten] ************************* 2025-05-29 01:05:29.985463 | orchestrator | Thursday 29 May 2025 01:04:06 +0000 (0:00:02.556) 0:00:41.423 ********** 2025-05-29 
01:05:29.985472 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:05:29.985482 | orchestrator | 2025-05-29 01:05:29.985492 | orchestrator | TASK [magnum : Set magnum policy file] ***************************************** 2025-05-29 01:05:29.985501 | orchestrator | Thursday 29 May 2025 01:04:06 +0000 (0:00:00.566) 0:00:41.989 ********** 2025-05-29 01:05:29.985510 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:05:29.985520 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:05:29.985530 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:05:29.985539 | orchestrator | 2025-05-29 01:05:29.985549 | orchestrator | TASK [magnum : Check if kubeconfig file is supplied] *************************** 2025-05-29 01:05:29.985558 | orchestrator | Thursday 29 May 2025 01:04:07 +0000 (0:00:01.292) 0:00:43.282 ********** 2025-05-29 01:05:29.985568 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-29 01:05:29.985577 | orchestrator | 2025-05-29 01:05:29.985587 | orchestrator | TASK [magnum : Copying over kubeconfig file] *********************************** 2025-05-29 01:05:29.985596 | orchestrator | Thursday 29 May 2025 01:04:08 +0000 (0:00:00.740) 0:00:44.022 ********** 2025-05-29 01:05:29.985606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 
'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-29 01:05:29.985623 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:05:29.985634 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:05:29.985657 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-29 01:05:29.985668 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:05:29.985678 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:05:29.985688 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-29 01:05:29.985699 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:05:29.985714 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:05:29.985724 | orchestrator | 2025-05-29 01:05:29.985734 | orchestrator | TASK [magnum : Set magnum kubeconfig file's path] ****************************** 2025-05-29 01:05:29.985743 | orchestrator | Thursday 29 May 2025 01:04:10 +0000 (0:00:01.863) 0:00:45.885 ********** 2025-05-29 01:05:29.985753 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:05:29.985763 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:05:29.985772 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:05:29.985782 | orchestrator | 2025-05-29 01:05:29.985791 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-29 01:05:29.985801 | orchestrator | Thursday 29 May 2025 01:04:11 +0000 (0:00:00.695) 0:00:46.581 ********** 2025-05-29 01:05:29.985810 | orchestrator | included: /ansible/roles/magnum/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:05:29.985820 | orchestrator | 2025-05-29 01:05:29.985830 | orchestrator | TASK [service-cert-copy : magnum | Copying over extra CA certificates] ********* 2025-05-29 01:05:29.985839 | orchestrator | Thursday 29 May 2025 01:04:12 +0000 (0:00:01.086) 0:00:47.668 ********** 2025-05-29 01:05:29.985860 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 
'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.985872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.985882 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.985936 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.985958 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': 
['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.985975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.985986 | orchestrator | 2025-05-29 01:05:29.985996 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS certificate] *** 2025-05-29 01:05:29.986005 | orchestrator | Thursday 29 May 2025 01:04:15 +0000 (0:00:02.908) 0:00:50.577 ********** 2025-05-29 01:05:29.986068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-29 01:05:29.986082 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:05:29.986099 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:05:29.986109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-29 01:05:29.986132 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:05:29.986142 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:05:29.986152 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  
2025-05-29 01:05:29.986163 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:05:29.986179 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:05:29.986189 | orchestrator | 2025-05-29 01:05:29.986198 | orchestrator | TASK [service-cert-copy : magnum | Copying over backend internal TLS key] ****** 2025-05-29 01:05:29.986208 | orchestrator | Thursday 29 May 2025 01:04:17 +0000 (0:00:02.271) 0:00:52.849 ********** 2025-05-29 01:05:29.986218 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-29 01:05:29.986229 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:05:29.986239 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:05:29 | INFO  | Task c5a4286f-cc55-400d-a696-dbd74f6afb4d is in state SUCCESS 2025-05-29 01:05:29.986259 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511',
'listen_port': '9511'}}}})  2025-05-29 01:05:29.986735 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:05:29.986804 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:05:29.986830 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-29 01:05:29.986853 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:05:29.986873 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:05:29.986921 | orchestrator | 2025-05-29 01:05:29.986944 | orchestrator | TASK [magnum : Copying over config.json files for services] ******************** 2025-05-29 01:05:29.986964 | orchestrator | Thursday 29 May 2025 01:04:20 +0000 (0:00:03.044) 0:00:55.894 ********** 2025-05-29 01:05:29.987002 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.987052 | orchestrator | changed: [testbed-node-1] => 
(item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.987074 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.987108 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 
'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.987130 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.987159 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.987180 | orchestrator | 2025-05-29 01:05:29.987211 | orchestrator | TASK [magnum : Copying over magnum.conf] *************************************** 2025-05-29 01:05:29.987231 | orchestrator | Thursday 29 May 2025 01:04:24 +0000 (0:00:04.331) 0:01:00.225 ********** 2025-05-29 01:05:29.987251 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.987285 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.987306 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.987321 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.987346 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.987358 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.987377 | orchestrator | 2025-05-29 01:05:29.987388 | orchestrator | TASK [magnum : Copying over existing policy file] ****************************** 2025-05-29 01:05:29.987399 | orchestrator | Thursday 29 May 2025 01:04:34 +0000 (0:00:09.397) 0:01:09.622 ********** 2025-05-29 01:05:29.987411 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-29 01:05:29.987422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:05:29.987434 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:05:29.987445 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 
'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-29 01:05:29.987469 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:05:29.987489 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:05:29.987510 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}})  2025-05-29 01:05:29.987529 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:05:29.987549 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:05:29.987570 | orchestrator | 2025-05-29 01:05:29.987589 | orchestrator | TASK [magnum : Check magnum containers] **************************************** 2025-05-29 01:05:29.987608 | orchestrator | Thursday 29 May 2025 01:04:34 +0000 (0:00:00.618) 0:01:10.241 ********** 2025-05-29 01:05:29.987620 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': 
['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.987666 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.987679 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-api', 'value': {'container_name': 'magnum_api', 'group': 'magnum-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-api:18.0.1.20241206', 'environment': {'DUMMY_ENVIRONMENT': 'kolla_useless_env'}, 'volumes': ['/etc/kolla/magnum-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9511'], 'timeout': '30'}, 'haproxy': {'magnum_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '9511', 'listen_port': '9511'}, 'magnum_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9511', 'listen_port': '9511'}}}}) 2025-05-29 01:05:29.987699 | orchestrator | changed: [testbed-node-0] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.987711 | orchestrator | changed: [testbed-node-1] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.987722 | orchestrator | changed: [testbed-node-2] => (item={'key': 'magnum-conductor', 'value': {'container_name': 'magnum_conductor', 'group': 'magnum-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/magnum-conductor:18.0.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'volumes': ['/etc/kolla/magnum-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'magnum:/var/lib/magnum/', '', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port magnum-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:05:29.987733 | orchestrator | 2025-05-29 01:05:29.987744 | orchestrator | TASK [magnum : include_tasks] ************************************************** 2025-05-29 01:05:29.987755 | orchestrator | Thursday 29 May 2025 01:04:36 +0000 (0:00:02.113) 0:01:12.355 ********** 2025-05-29 01:05:29.987766 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:05:29.987777 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:05:29.987788 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:05:29.987799 | orchestrator | 2025-05-29 01:05:29.987810 | orchestrator | TASK [magnum : Creating Magnum database] *************************************** 2025-05-29 01:05:29.987820 | orchestrator | Thursday 29 May 2025 01:04:37 +0000 (0:00:00.203) 0:01:12.559 ********** 2025-05-29 01:05:29.987831 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:05:29.987842 | orchestrator | 2025-05-29 01:05:29.987853 | orchestrator | TASK [magnum : Creating Magnum database user and setting permissions] ********** 2025-05-29 01:05:29.987875 | orchestrator | Thursday 29 May 2025 01:04:39 +0000 (0:00:02.182) 0:01:14.742 
********** 2025-05-29 01:05:29.987886 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:05:29.987926 | orchestrator | 2025-05-29 01:05:29.987938 | orchestrator | TASK [magnum : Running Magnum bootstrap container] ***************************** 2025-05-29 01:05:29.987949 | orchestrator | Thursday 29 May 2025 01:04:41 +0000 (0:00:02.228) 0:01:16.970 ********** 2025-05-29 01:05:29.987960 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:05:29.987985 | orchestrator | 2025-05-29 01:05:29.988003 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-29 01:05:29.988014 | orchestrator | Thursday 29 May 2025 01:04:59 +0000 (0:00:17.570) 0:01:34.540 ********** 2025-05-29 01:05:29.988025 | orchestrator | 2025-05-29 01:05:29.988036 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-29 01:05:29.988047 | orchestrator | Thursday 29 May 2025 01:04:59 +0000 (0:00:00.075) 0:01:34.615 ********** 2025-05-29 01:05:29.988058 | orchestrator | 2025-05-29 01:05:29.988069 | orchestrator | TASK [magnum : Flush handlers] ************************************************* 2025-05-29 01:05:29.988080 | orchestrator | Thursday 29 May 2025 01:04:59 +0000 (0:00:00.197) 0:01:34.813 ********** 2025-05-29 01:05:29.988091 | orchestrator | 2025-05-29 01:05:29.988102 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-api container] ************************ 2025-05-29 01:05:29.988126 | orchestrator | Thursday 29 May 2025 01:04:59 +0000 (0:00:00.062) 0:01:34.875 ********** 2025-05-29 01:05:29.988137 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:05:29.988148 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:05:29.988159 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:05:29.988170 | orchestrator | 2025-05-29 01:05:29.988181 | orchestrator | RUNNING HANDLER [magnum : Restart magnum-conductor container] ****************** 2025-05-29 01:05:29.988191 | 
orchestrator | Thursday 29 May 2025 01:05:12 +0000 (0:00:13.486) 0:01:48.362 ********** 2025-05-29 01:05:29.988202 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:05:29.988213 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:05:29.988224 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:05:29.988234 | orchestrator | 2025-05-29 01:05:29.988245 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 01:05:29.988257 | orchestrator | testbed-node-0 : ok=24  changed=17  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-29 01:05:29.988273 | orchestrator | testbed-node-1 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-29 01:05:29.988292 | orchestrator | testbed-node-2 : ok=11  changed=7  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-29 01:05:29.988319 | orchestrator | 2025-05-29 01:05:29.988339 | orchestrator | 2025-05-29 01:05:29.988355 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 01:05:29.988371 | orchestrator | Thursday 29 May 2025 01:05:27 +0000 (0:00:14.996) 0:02:03.358 ********** 2025-05-29 01:05:29.988387 | orchestrator | =============================================================================== 2025-05-29 01:05:29.988404 | orchestrator | magnum : Running Magnum bootstrap container ---------------------------- 17.57s 2025-05-29 01:05:29.988420 | orchestrator | magnum : Restart magnum-conductor container ---------------------------- 15.00s 2025-05-29 01:05:29.988437 | orchestrator | magnum : Restart magnum-api container ---------------------------------- 13.49s 2025-05-29 01:05:29.988454 | orchestrator | magnum : Copying over magnum.conf --------------------------------------- 9.40s 2025-05-29 01:05:29.988472 | orchestrator | service-ks-register : magnum | Creating endpoints ----------------------- 6.36s 2025-05-29 01:05:29.988491 | orchestrator | 
service-ks-register : magnum | Granting user roles ---------------------- 4.36s 2025-05-29 01:05:29.988505 | orchestrator | magnum : Copying over config.json files for services -------------------- 4.33s 2025-05-29 01:05:29.988516 | orchestrator | magnum : Creating Magnum trustee user ----------------------------------- 4.21s 2025-05-29 01:05:29.988537 | orchestrator | service-ks-register : magnum | Creating users --------------------------- 3.84s 2025-05-29 01:05:29.988548 | orchestrator | service-ks-register : magnum | Creating services ------------------------ 3.77s 2025-05-29 01:05:29.988559 | orchestrator | magnum : Creating Magnum trustee user role ------------------------------ 3.69s 2025-05-29 01:05:29.988569 | orchestrator | service-ks-register : magnum | Creating projects ------------------------ 3.47s 2025-05-29 01:05:29.988580 | orchestrator | magnum : Creating Magnum trustee domain --------------------------------- 3.30s 2025-05-29 01:05:29.988591 | orchestrator | service-ks-register : magnum | Creating roles --------------------------- 3.23s 2025-05-29 01:05:29.988602 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS key ------ 3.05s 2025-05-29 01:05:29.988613 | orchestrator | service-cert-copy : magnum | Copying over extra CA certificates --------- 2.91s 2025-05-29 01:05:29.988624 | orchestrator | magnum : Ensuring config directories exist ------------------------------ 2.56s 2025-05-29 01:05:29.988635 | orchestrator | service-cert-copy : magnum | Copying over backend internal TLS certificate --- 2.27s 2025-05-29 01:05:29.988646 | orchestrator | magnum : Creating Magnum database user and setting permissions ---------- 2.23s 2025-05-29 01:05:29.988657 | orchestrator | magnum : Creating Magnum database --------------------------------------- 2.18s 2025-05-29 01:05:29.988668 | orchestrator | 2025-05-29 01:05:29 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:05:29.988679 | orchestrator 
| 2025-05-29 01:05:29 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:05:29.988690 | orchestrator | 2025-05-29 01:05:29 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state STARTED 2025-05-29 01:05:29.988714 | orchestrator | 2025-05-29 01:05:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:05:29.988732 | orchestrator | 2025-05-29 01:05:29 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:05:29.988744 | orchestrator | 2025-05-29 01:05:29 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:06:15.654534 | orchestrator | 2025-05-29 01:06:15 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:06:15.654810 | orchestrator | 2025-05-29 01:06:15 | INFO  | Task
b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:06:15.655928 | orchestrator | 2025-05-29 01:06:15 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:06:15.664844 | orchestrator | 2025-05-29 01:06:15 | INFO  | Task 714be9b6-46c8-4528-8b44-60523f1aad38 is in state SUCCESS 2025-05-29 01:06:15.665786 | orchestrator | 2025-05-29 01:06:15.665820 | orchestrator | 2025-05-29 01:06:15.665832 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-29 01:06:15.665844 | orchestrator | 2025-05-29 01:06:15.665856 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-29 01:06:15.665867 | orchestrator | Thursday 29 May 2025 01:01:16 +0000 (0:00:00.414) 0:00:00.414 ********** 2025-05-29 01:06:15.665879 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:06:15.665891 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:06:15.665927 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:06:15.666006 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:06:15.666180 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:06:15.666195 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:06:15.666206 | orchestrator | 2025-05-29 01:06:15.666217 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-29 01:06:15.666229 | orchestrator | Thursday 29 May 2025 01:01:17 +0000 (0:00:00.960) 0:00:01.375 ********** 2025-05-29 01:06:15.666240 | orchestrator | ok: [testbed-node-0] => (item=enable_neutron_True) 2025-05-29 01:06:15.666251 | orchestrator | ok: [testbed-node-1] => (item=enable_neutron_True) 2025-05-29 01:06:15.666262 | orchestrator | ok: [testbed-node-2] => (item=enable_neutron_True) 2025-05-29 01:06:15.666301 | orchestrator | ok: [testbed-node-3] => (item=enable_neutron_True) 2025-05-29 01:06:15.666338 | orchestrator | ok: [testbed-node-4] => (item=enable_neutron_True) 2025-05-29 
01:06:15.666349 | orchestrator | ok: [testbed-node-5] => (item=enable_neutron_True) 2025-05-29 01:06:15.666360 | orchestrator | 2025-05-29 01:06:15.666372 | orchestrator | PLAY [Apply role neutron] ****************************************************** 2025-05-29 01:06:15.666385 | orchestrator | 2025-05-29 01:06:15.666399 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-29 01:06:15.666412 | orchestrator | Thursday 29 May 2025 01:01:17 +0000 (0:00:00.740) 0:00:02.116 ********** 2025-05-29 01:06:15.666427 | orchestrator | included: /ansible/roles/neutron/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 01:06:15.666441 | orchestrator | 2025-05-29 01:06:15.666455 | orchestrator | TASK [neutron : Get container facts] ******************************************* 2025-05-29 01:06:15.666470 | orchestrator | Thursday 29 May 2025 01:01:18 +0000 (0:00:01.156) 0:00:03.272 ********** 2025-05-29 01:06:15.666483 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:06:15.666495 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:06:15.666507 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:06:15.666519 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:06:15.666531 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:06:15.666543 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:06:15.666556 | orchestrator | 2025-05-29 01:06:15.666583 | orchestrator | TASK [neutron : Get container volume facts] ************************************ 2025-05-29 01:06:15.666597 | orchestrator | Thursday 29 May 2025 01:01:20 +0000 (0:00:01.108) 0:00:04.381 ********** 2025-05-29 01:06:15.666610 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:06:15.666622 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:06:15.666673 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:06:15.666686 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:06:15.666699 | orchestrator | 
ok: [testbed-node-4] 2025-05-29 01:06:15.666711 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:06:15.666724 | orchestrator | 2025-05-29 01:06:15.666738 | orchestrator | TASK [neutron : Check for ML2/OVN presence] ************************************ 2025-05-29 01:06:15.666749 | orchestrator | Thursday 29 May 2025 01:01:21 +0000 (0:00:01.106) 0:00:05.488 ********** 2025-05-29 01:06:15.666760 | orchestrator | ok: [testbed-node-0] => { 2025-05-29 01:06:15.666772 | orchestrator |  "changed": false, 2025-05-29 01:06:15.666783 | orchestrator |  "msg": "All assertions passed" 2025-05-29 01:06:15.666795 | orchestrator | } 2025-05-29 01:06:15.666806 | orchestrator | ok: [testbed-node-1] => { 2025-05-29 01:06:15.666817 | orchestrator |  "changed": false, 2025-05-29 01:06:15.666828 | orchestrator |  "msg": "All assertions passed" 2025-05-29 01:06:15.666838 | orchestrator | } 2025-05-29 01:06:15.666849 | orchestrator | ok: [testbed-node-2] => { 2025-05-29 01:06:15.666860 | orchestrator |  "changed": false, 2025-05-29 01:06:15.666871 | orchestrator |  "msg": "All assertions passed" 2025-05-29 01:06:15.666882 | orchestrator | } 2025-05-29 01:06:15.666892 | orchestrator | ok: [testbed-node-3] => { 2025-05-29 01:06:15.666903 | orchestrator |  "changed": false, 2025-05-29 01:06:15.666914 | orchestrator |  "msg": "All assertions passed" 2025-05-29 01:06:15.666925 | orchestrator | } 2025-05-29 01:06:15.666936 | orchestrator | ok: [testbed-node-4] => { 2025-05-29 01:06:15.667009 | orchestrator |  "changed": false, 2025-05-29 01:06:15.667021 | orchestrator |  "msg": "All assertions passed" 2025-05-29 01:06:15.667032 | orchestrator | } 2025-05-29 01:06:15.667043 | orchestrator | ok: [testbed-node-5] => { 2025-05-29 01:06:15.667054 | orchestrator |  "changed": false, 2025-05-29 01:06:15.667065 | orchestrator |  "msg": "All assertions passed" 2025-05-29 01:06:15.667076 | orchestrator | } 2025-05-29 01:06:15.667087 | orchestrator | 2025-05-29 01:06:15.667098 | orchestrator | TASK 
[neutron : Check for ML2/OVS presence] ************************************ 2025-05-29 01:06:15.667109 | orchestrator | Thursday 29 May 2025 01:01:21 +0000 (0:00:00.845) 0:00:06.334 ********** 2025-05-29 01:06:15.667120 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.667130 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.667149 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.667160 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.667171 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.667182 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.667193 | orchestrator | 2025-05-29 01:06:15.667204 | orchestrator | TASK [service-ks-register : neutron | Creating services] *********************** 2025-05-29 01:06:15.667215 | orchestrator | Thursday 29 May 2025 01:01:22 +0000 (0:00:00.961) 0:00:07.296 ********** 2025-05-29 01:06:15.667226 | orchestrator | changed: [testbed-node-0] => (item=neutron (network)) 2025-05-29 01:06:15.667237 | orchestrator | 2025-05-29 01:06:15.667248 | orchestrator | TASK [service-ks-register : neutron | Creating endpoints] ********************** 2025-05-29 01:06:15.667259 | orchestrator | Thursday 29 May 2025 01:01:26 +0000 (0:00:03.486) 0:00:10.782 ********** 2025-05-29 01:06:15.667270 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api-int.testbed.osism.xyz:9696 -> internal) 2025-05-29 01:06:15.667282 | orchestrator | changed: [testbed-node-0] => (item=neutron -> https://api.testbed.osism.xyz:9696 -> public) 2025-05-29 01:06:15.667293 | orchestrator | 2025-05-29 01:06:15.667318 | orchestrator | TASK [service-ks-register : neutron | Creating projects] *********************** 2025-05-29 01:06:15.667329 | orchestrator | Thursday 29 May 2025 01:01:32 +0000 (0:00:06.415) 0:00:17.198 ********** 2025-05-29 01:06:15.667340 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-29 01:06:15.667351 | orchestrator | 2025-05-29 
01:06:15.667362 | orchestrator | TASK [service-ks-register : neutron | Creating users] ************************** 2025-05-29 01:06:15.667373 | orchestrator | Thursday 29 May 2025 01:01:36 +0000 (0:00:03.303) 0:00:20.501 ********** 2025-05-29 01:06:15.667384 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-29 01:06:15.667394 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service) 2025-05-29 01:06:15.667405 | orchestrator | 2025-05-29 01:06:15.667416 | orchestrator | TASK [service-ks-register : neutron | Creating roles] ************************** 2025-05-29 01:06:15.667427 | orchestrator | Thursday 29 May 2025 01:01:39 +0000 (0:00:03.833) 0:00:24.335 ********** 2025-05-29 01:06:15.667438 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-29 01:06:15.667448 | orchestrator | 2025-05-29 01:06:15.667459 | orchestrator | TASK [service-ks-register : neutron | Granting user roles] ********************* 2025-05-29 01:06:15.667470 | orchestrator | Thursday 29 May 2025 01:01:43 +0000 (0:00:03.391) 0:00:27.727 ********** 2025-05-29 01:06:15.667480 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> admin) 2025-05-29 01:06:15.667491 | orchestrator | changed: [testbed-node-0] => (item=neutron -> service -> service) 2025-05-29 01:06:15.667502 | orchestrator | 2025-05-29 01:06:15.667513 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-29 01:06:15.667523 | orchestrator | Thursday 29 May 2025 01:01:51 +0000 (0:00:08.347) 0:00:36.074 ********** 2025-05-29 01:06:15.667534 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.667545 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.667556 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.667567 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.667577 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.667588 | orchestrator | skipping: 
[testbed-node-5] 2025-05-29 01:06:15.667599 | orchestrator | 2025-05-29 01:06:15.667610 | orchestrator | TASK [Load and persist kernel modules] ***************************************** 2025-05-29 01:06:15.667620 | orchestrator | Thursday 29 May 2025 01:01:52 +0000 (0:00:00.757) 0:00:36.832 ********** 2025-05-29 01:06:15.667631 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.667642 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.667652 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.667663 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.667674 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.667684 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.667695 | orchestrator | 2025-05-29 01:06:15.667712 | orchestrator | TASK [neutron : Check IPv6 support] ******************************************** 2025-05-29 01:06:15.667729 | orchestrator | Thursday 29 May 2025 01:01:56 +0000 (0:00:04.093) 0:00:40.925 ********** 2025-05-29 01:06:15.667740 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:06:15.667751 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:06:15.667762 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:06:15.667772 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:06:15.667783 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:06:15.667794 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:06:15.667804 | orchestrator | 2025-05-29 01:06:15.667815 | orchestrator | TASK [Setting sysctl values] *************************************************** 2025-05-29 01:06:15.667826 | orchestrator | Thursday 29 May 2025 01:01:57 +0000 (0:00:00.895) 0:00:41.820 ********** 2025-05-29 01:06:15.667837 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.667848 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.667859 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.667869 | orchestrator | skipping: [testbed-node-2] 2025-05-29 
01:06:15.667880 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.667891 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.667901 | orchestrator | 2025-05-29 01:06:15.667912 | orchestrator | TASK [neutron : Ensuring config directories exist] ***************************** 2025-05-29 01:06:15.667923 | orchestrator | Thursday 29 May 2025 01:02:00 +0000 (0:00:02.870) 0:00:44.691 ********** 2025-05-29 01:06:15.667937 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.667962 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668006 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668045 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.668057 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668071 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.668092 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.668105 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.668117 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668142 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668154 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668165 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668183 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.668196 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': 
'30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.668226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668238 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668249 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.668261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.668278 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668290 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668312 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.668336 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.668347 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.668370 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.668391 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.668433 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.668454 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.668474 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.668504 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.668524 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-29 01:06:15.668555 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.668584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.668606 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.668619 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.668630 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.669162 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:06:15.669228 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.669257 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-29 01:06:15.669301 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.669323 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.669388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669400 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669568 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669603 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-29 01:06:15.669633 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.669660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669681 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669699 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669718 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.669752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669765 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.669782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-29 01:06:15.669794 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669819 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.669843 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.669856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669868 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.669885 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669897 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:06:15.669909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.669926 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.669944 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.669956 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:06:15.669998 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.670064 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.670080 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-29 01:06:15.670108 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.670169 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-29 01:06:15.670183 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.670194 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.670204 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.670218 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.670447 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.670465 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.670475 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:06:15.670490 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.670501 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.670511 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.670529 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group':
'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.670546 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.670557 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-05-29 01:06:15.670572 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.670582 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.670598 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  
2025-05-29 01:06:15.670614 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.670625 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.670635 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.670649 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-29 01:06:15.670660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:06:15.670676 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.670691 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.670701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.670711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.670743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-29 01:06:15.670755 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.670795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.670807 | orchestrator |
2025-05-29 01:06:15.670818 | orchestrator | TASK [neutron : Check if extra ml2 plugins exists] *****************************
2025-05-29 01:06:15.670828 | orchestrator | Thursday 29 May 2025 01:02:03 +0000 (0:00:03.409) 0:00:48.101 **********
2025-05-29 01:06:15.670838 | orchestrator | [WARNING]: Skipped
2025-05-29 01:06:15.670848 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' path
2025-05-29 01:06:15.670858 | orchestrator | due to this access issue:
2025-05-29 01:06:15.670873 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/neutron/plugins/' is not
2025-05-29 01:06:15.670883 | orchestrator | a directory
2025-05-29 01:06:15.670893 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-29 01:06:15.670922 | orchestrator |
2025-05-29 01:06:15.670932 | orchestrator | TASK [neutron : include_tasks] *************************************************
2025-05-29 01:06:15.670942 | orchestrator | Thursday 29 May 2025 01:02:04 +0000 (0:00:00.549) 0:00:48.651 **********
2025-05-29 01:06:15.670952 | orchestrator | included: /ansible/roles/neutron/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5
2025-05-29 01:06:15.670962 | orchestrator |
2025-05-29 01:06:15.671035 | orchestrator | TASK [service-cert-copy : neutron | Copying over extra CA certificates] ********
2025-05-29 01:06:15.671075 | orchestrator | Thursday 29 May 2025 01:02:05 +0000 (0:00:01.399) 0:00:50.050 **********
2025-05-29 01:06:15.671084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.671098 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.671113 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.671121 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.671135 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.671144 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.671153 | orchestrator |
2025-05-29 01:06:15.671161 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS certificate] ***
2025-05-29 01:06:15.671169 | orchestrator | Thursday 29 May 2025 01:02:11 +0000 (0:00:05.369) 0:00:55.419 **********
2025-05-29 01:06:15.671186 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.671194 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:06:15.671203 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.671211 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:06:15.671224 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.671233 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:06:15.671241 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.671249 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:06:15.671265 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.671278 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:06:15.671286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.671295 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:06:15.671302 | orchestrator |
2025-05-29 01:06:15.671395 | orchestrator | TASK [service-cert-copy : neutron | Copying over backend internal TLS key] *****
2025-05-29 01:06:15.671409 | orchestrator | Thursday 29 May 2025 01:02:14 +0000 (0:00:03.704) 0:00:59.124 **********
2025-05-29 01:06:15.671424 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.671438 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:06:15.671454 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.671462 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:06:15.671471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.671485 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:06:15.671498 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.671507 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:06:15.671515 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.671523 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:06:15.671531 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.671540 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:06:15.671547 | orchestrator |
2025-05-29 01:06:15.671560 | orchestrator | TASK [neutron : Creating TLS backend PEM File] *********************************
2025-05-29 01:06:15.671568 | orchestrator | Thursday 29 May 2025 01:02:19 +0000 (0:00:04.445) 0:01:03.570 **********
2025-05-29 01:06:15.671577 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:06:15.671585 | orchestrator |
skipping: [testbed-node-3] 2025-05-29 01:06:15.671592 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.671600 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.671608 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.671616 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.671624 | orchestrator | 2025-05-29 01:06:15.671631 | orchestrator | TASK [neutron : Check if policies shall be overwritten] ************************ 2025-05-29 01:06:15.671639 | orchestrator | Thursday 29 May 2025 01:02:24 +0000 (0:00:05.484) 0:01:09.054 ********** 2025-05-29 01:06:15.671690 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.671699 | orchestrator | 2025-05-29 01:06:15.671707 | orchestrator | TASK [neutron : Set neutron policy file] *************************************** 2025-05-29 01:06:15.671760 | orchestrator | Thursday 29 May 2025 01:02:24 +0000 (0:00:00.168) 0:01:09.222 ********** 2025-05-29 01:06:15.671768 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.671776 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.671784 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.671791 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.671799 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.671807 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.671815 | orchestrator | 2025-05-29 01:06:15.671823 | orchestrator | TASK [neutron : Copying over existing policy file] ***************************** 2025-05-29 01:06:15.671830 | orchestrator | Thursday 29 May 2025 01:02:25 +0000 (0:00:00.895) 0:01:10.118 ********** 2025-05-29 01:06:15.671843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.671852 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.671860 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.671874 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.671889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.671897 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.671910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.671919 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.671927 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.671936 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.671954 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.671963 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.671971 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.671998 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672007 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.672016 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.672035 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672044 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.672053 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.672065 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672074 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.672082 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672100 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 
'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672109 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672121 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672138 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.672157 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.672166 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672174 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.672189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 
'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.672197 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672206 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672223 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.672232 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.672240 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672249 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.672261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.672269 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.672278 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672291 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672316 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.672326 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.672340 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672349 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.672417 | 
orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.672430 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672439 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.672448 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.672478 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672491 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.672500 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.672514 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672522 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.672536 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.672546 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672566 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.672592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2025-05-29 01:06:15.672601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.672610 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.672621 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672630 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.672647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672656 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.672668 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.672677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672689 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.672698 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.672712 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672720 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.672732 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.672741 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672750 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672768 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672784 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.672792 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672805 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.672814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.672822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672834 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.672850 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.672867 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.672880 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.672889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-29 01:06:15.672901 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.672915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.672924 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:06:15.672932 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.673291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673307 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673320 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673335 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-29 01:06:15.673344 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673353 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.673367 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.673376 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673385 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.673401 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673410 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:06:15.673419 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.673430 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673439 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-29 01:06:15.673448 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.673465 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673474 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:06:15.673482 | orchestrator |
2025-05-29 01:06:15.673490 | orchestrator | TASK [neutron : Copying over config.json files for services] *******************
2025-05-29 01:06:15.673498 | orchestrator | Thursday 29 May 2025 01:02:29 +0000 (0:00:04.003) 0:01:14.122 **********
2025-05-29 01:06:15.673507 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.673521 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673542 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673554 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-29 01:06:15.673563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673571 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.673579 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.673592 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673601 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.673618 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673627 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.673635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673648 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673664 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673676 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673685 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673693 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-29 01:06:15.673708 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673716 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-29 01:06:15.673729 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.673741 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.673750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673863 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673873 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.673886 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.673901 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.673910 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673922 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.673931 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:06:15.673941 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.674117 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.674133 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.674150 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.674165 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-29 01:06:15.674175 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:06:15.674185 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.674202 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.674216 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.674224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674236 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674245 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674253 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 
'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674264 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.674288 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.674297 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674305 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.674312 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.674327 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674334 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.674345 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674352 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-29 01:06:15.674359 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674414 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.674432 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.674439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674482 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.674491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.674498 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674508 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-29 01:06:15.674521 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674528 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674538 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674546 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.674553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674569 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.674626 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.674635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674646 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.674653 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674660 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.674671 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.674682 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674689 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.674700 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.674707 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674714 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 
'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.674730 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674737 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674765 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.674772 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674783 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.674795 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.674802 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674809 | orchestrator | changed: 
[testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-29 01:06:15.674820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-05-29 01:06:15.674847 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.674854 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674866 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.674874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.674884 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674891 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-29 01:06:15.674903 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674913 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.674921 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.674928 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674938 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.674951 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.674958 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.674965 | orchestrator | 2025-05-29 01:06:15.674985 | orchestrator | TASK [neutron : Copying over neutron.conf] ************************************* 2025-05-29 01:06:15.674992 | orchestrator | Thursday 29 May 2025 01:02:35 +0000 (0:00:05.558) 0:01:19.680 ********** 2025-05-29 01:06:15.675013 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 
'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.675021 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675032 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675043 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675050 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  
2025-05-29 01:06:15.675068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.675082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.675092 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675104 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.675140 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675151 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675158 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675169 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.675182 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.675197 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.675204 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675232 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.675240 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675261 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675268 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-29 01:06:15.675279 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675287 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.675294 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.675309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675316 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.675323 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675334 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675341 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675355 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-29 01:06:15.675363 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.675377 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.675388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675395 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.675406 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675416 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:06:15.675424 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.675431 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675442 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-29 01:06:15.675450 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.675464 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675474 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.675481 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675492 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675499 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675517 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-29 01:06:15.675527 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675534 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.675541 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.675548 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675560 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.675567 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675577 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:06:15.675587 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.675594 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675601 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})
2025-05-29 01:06:15.675614 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.675625 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675635 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.675642 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675649 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675660 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675671 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})
2025-05-29 01:06:15.675678 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.675696 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})
2025-05-29 01:06:15.675703 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.675710 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})
2025-05-29 01:06:15.675721 |
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675732 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.675739 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.675749 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675757 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.675764 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.675780 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675787 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-29 01:06:15.675797 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675805 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.675812 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.675819 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675834 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.675841 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.675853 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675860 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-29 01:06:15.675867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675874 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.675889 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.675896 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675906 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.675914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.675921 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675935 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-29 01:06:15.675943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.675950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.675960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.675967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': 
'30'}}})  2025-05-29 01:06:15.675987 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.676195 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.676207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676214 | orchestrator | 2025-05-29 01:06:15.676221 | orchestrator | TASK [neutron : Copying over neutron_vpnaas.conf] ****************************** 2025-05-29 01:06:15.676228 | orchestrator | Thursday 29 May 2025 01:02:42 +0000 (0:00:07.392) 0:01:27.073 ********** 2025-05-29 01:06:15.676239 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-29 01:06:15.676247 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676255 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676271 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676278 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.676285 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.676303 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.676310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676322 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.676332 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676340 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.676347 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 
01:06:15.676357 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676364 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.676376 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.676387 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.676394 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676423 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676434 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.676442 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.676459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.676466 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676479 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.676487 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.676505 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 
'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.676512 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676522 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 
'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.676533 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.676540 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676547 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.676558 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-29 01:06:15.676566 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676576 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676588 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676595 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.676605 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 
'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.676620 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.676630 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.676649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.676667 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.676674 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676681 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.676692 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.676721 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.676734 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676741 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.676751 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676764 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676771 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-29 01:06:15.676782 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676790 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676800 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676812 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676820 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676831 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.676838 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676845 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676860 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.676867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676874 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.676885 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  
2025-05-29 01:06:15.676892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.676900 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.676917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.676925 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.677074 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.677085 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.677097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  
2025-05-29 01:06:15.677105 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.677112 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.677129 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677136 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677143 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677153 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.677161 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.677168 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.677215 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.677224 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677231 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677238 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 
'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.677250 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.677258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677270 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.677281 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.677288 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.677296 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677522 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677542 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.677564 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.677578 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.677586 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 
'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.677592 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.677632 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677663 | orchestrator | skipping: [testbed-node-5] => 
(item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.677677 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})
2025-05-29 01:06:15.677687 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:06:15.677694 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:06:15.677701 | orchestrator |
2025-05-29 01:06:15.677707 | orchestrator | TASK [neutron : Copying over ssh key] ******************************************
2025-05-29 01:06:15.677714 | orchestrator | Thursday 29 May 2025 01:02:47 +0000 (0:00:04.782) 0:01:31.855 **********
2025-05-29 01:06:15.677720 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:06:15.677726 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:06:15.677732 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:06:15.677739 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:06:15.677745 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:06:15.677788 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:06:15.677795 | orchestrator |
2025-05-29 01:06:15.677801 | orchestrator | TASK [neutron : Copying over ml2_conf.ini] *************************************
2025-05-29 01:06:15.677807 | orchestrator | Thursday 29 May 2025 01:02:52 +0000 (0:00:04.806) 0:01:36.661 **********
2025-05-29 01:06:15.677814 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.677826 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677837 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677848 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677855 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.677861 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677868 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.677882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.677889 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677895 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.677907 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677914 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.677921 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.677927 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677942 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.677950 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.677960 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.677966 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.677994 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': 
'9696'}}}})  2025-05-29 01:06:15.678002 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678048 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678057 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678068 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.678075 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678082 | orchestrator | skipping: [testbed-node-4] 
=> (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678099 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678110 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678117 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.678125 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678136 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.678144 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678152 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678170 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.678179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.678186 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678197 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.678205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.678213 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678227 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678235 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678242 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.678252 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678258 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678269 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.678292 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678302 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.678309 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678316 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678326 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': 
['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.678336 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.678343 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678350 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.678359 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-29 01:06:15.678367 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678379 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678389 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.678405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678412 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-29 01:06:15.678422 
| orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678439 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678446 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678456 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678462 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678473 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.678482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.678489 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 
'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678496 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.678516 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678523 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678529 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678536 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 
'timeout': '30'}}})  2025-05-29 01:06:15.678546 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678553 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.678563 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.678575 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678581 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.678599 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678606 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.678613 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678631 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 
'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678642 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.678649 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.678659 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678666 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-29 01:06:15.678676 | orchestrator | skipping: 
[testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678687 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678698 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.678726 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678768 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.678789 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678807 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.678818 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.678829 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678854 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.678864 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.678893 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.678900 | orchestrator | 2025-05-29 01:06:15.678907 | orchestrator | TASK [neutron : Copying over linuxbridge_agent.ini] **************************** 2025-05-29 01:06:15.678913 | orchestrator | Thursday 29 May 2025 01:02:56 +0000 (0:00:03.892) 0:01:40.554 ********** 2025-05-29 01:06:15.678920 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.678931 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.678937 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.678944 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.678950 | orchestrator | 
skipping: [testbed-node-3] 2025-05-29 01:06:15.678956 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.678962 | orchestrator | 2025-05-29 01:06:15.678968 | orchestrator | TASK [neutron : Copying over openvswitch_agent.ini] **************************** 2025-05-29 01:06:15.679082 | orchestrator | Thursday 29 May 2025 01:02:59 +0000 (0:00:03.127) 0:01:43.682 ********** 2025-05-29 01:06:15.679090 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.679096 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.679108 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.679115 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.679121 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.679127 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.679133 | orchestrator | 2025-05-29 01:06:15.679176 | orchestrator | TASK [neutron : Copying over sriov_agent.ini] ********************************** 2025-05-29 01:06:15.679183 | orchestrator | Thursday 29 May 2025 01:03:02 +0000 (0:00:02.886) 0:01:46.568 ********** 2025-05-29 01:06:15.679189 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.679195 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.679201 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.679208 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.679214 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.679220 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.679226 | orchestrator | 2025-05-29 01:06:15.679232 | orchestrator | TASK [neutron : Copying over mlnx_agent.ini] *********************************** 2025-05-29 01:06:15.679238 | orchestrator | Thursday 29 May 2025 01:03:04 +0000 (0:00:02.542) 0:01:49.110 ********** 2025-05-29 01:06:15.679245 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.679251 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.679257 | orchestrator | 
skipping: [testbed-node-1] 2025-05-29 01:06:15.679262 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.679267 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.679273 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.679278 | orchestrator | 2025-05-29 01:06:15.679284 | orchestrator | TASK [neutron : Copying over eswitchd.conf] ************************************ 2025-05-29 01:06:15.679289 | orchestrator | Thursday 29 May 2025 01:03:08 +0000 (0:00:03.931) 0:01:53.042 ********** 2025-05-29 01:06:15.679294 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.679300 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.679305 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.679310 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.679316 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.679321 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.679327 | orchestrator | 2025-05-29 01:06:15.679336 | orchestrator | TASK [neutron : Copying over dhcp_agent.ini] *********************************** 2025-05-29 01:06:15.679341 | orchestrator | Thursday 29 May 2025 01:03:10 +0000 (0:00:01.926) 0:01:54.969 ********** 2025-05-29 01:06:15.679347 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.679352 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.679357 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.679363 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.679368 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.679373 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.679379 | orchestrator | 2025-05-29 01:06:15.679384 | orchestrator | TASK [neutron : Copying over dnsmasq.conf] ************************************* 2025-05-29 01:06:15.679390 | orchestrator | Thursday 29 May 2025 01:03:12 +0000 (0:00:01.847) 0:01:56.816 ********** 2025-05-29 01:06:15.679395 | orchestrator | 
skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-29 01:06:15.679401 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.679406 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-29 01:06:15.679412 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.679417 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-29 01:06:15.679423 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.679428 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-29 01:06:15.679433 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.679439 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-29 01:06:15.679444 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.679454 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/dnsmasq.conf.j2)  2025-05-29 01:06:15.679459 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.679465 | orchestrator | 2025-05-29 01:06:15.679470 | orchestrator | TASK [neutron : Copying over l3_agent.ini] ************************************* 2025-05-29 01:06:15.679475 | orchestrator | Thursday 29 May 2025 01:03:14 +0000 (0:00:01.804) 0:01:58.620 ********** 2025-05-29 01:06:15.679486 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.679492 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2025-05-29 01:06:15.679506 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679512 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.679522 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679531 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.679538 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.679543 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679552 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.679558 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679567 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.679573 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.679583 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679589 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u 
openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.679598 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.679604 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679613 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.679619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.679629 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679635 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679644 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679650 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
"healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.679659 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679664 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.679674 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.679680 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679686 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.679695 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679704 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.679710 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.679715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679725 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 
'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.679731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.679741 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679751 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.679814 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.679820 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679830 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679837 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679851 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.679867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679877 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.679886 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': 
False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.679900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679909 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.679924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 
'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679940 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.679949 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.679958 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': 
True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.679988 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.679999 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.680014 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680025 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.680031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.680037 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 
'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680046 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680052 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680061 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.680125 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680132 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.680138 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.680144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 
'/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.680418 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680439 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.680449 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.680459 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680473 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.680483 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.680492 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680506 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.680520 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.680529 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680538 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680552 
| orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680562 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.680584 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680594 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.680604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.680613 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': 
'30'}}})  2025-05-29 01:06:15.680627 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.680637 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.680666 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.680676 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680686 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 
192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.680700 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.680711 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680728 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.680742 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': 
{'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.680752 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680761 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680775 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680784 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.680795 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680883 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.680891 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.680897 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 
'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680903 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.680914 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False,
2025-05-29 01:06:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:06:15.680925 | orchestrator | 2025-05-29 01:06:15 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED
2025-05-29 01:06:15.680931 | orchestrator | 2025-05-29 01:06:15 | INFO  | Wait 1 second(s) until the next check
'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680943 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.680958 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.680964 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.680970 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.680997 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.681008 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681014 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.681020 | orchestrator | 2025-05-29 01:06:15.681025 | orchestrator | TASK [neutron : Copying over fwaas_driver.ini] ********************************* 2025-05-29 01:06:15.681032 | orchestrator | Thursday 29 May 2025 01:03:16 +0000 (0:00:02.155) 0:02:00.776 ********** 2025-05-29 01:06:15.681042 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.681048 | 
orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681055 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681065 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681112 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.681120 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 
'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.681137 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.681144 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.681165 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681171 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.681179 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.681189 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 
'yes'}}}})  2025-05-29 01:06:15.681210 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.681221 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681227 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.681234 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.681244 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681251 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681261 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681272 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.681279 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681285 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.681295 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.681302 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681309 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.681322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.681336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.681346 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681352 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.681361 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.681370 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681375 | orchestrator | skipping: [testbed-node-2] 2025-05-29 
01:06:15.681381 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.681390 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681396 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681405 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681411 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.681434 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681441 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.681451 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.681466 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681476 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.681491 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.681505 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.682068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.682089 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 
01:06:15.682102 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.682116 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.682123 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': 
True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682129 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.682151 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.682158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682167 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682178 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682184 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 
'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.682190 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682246 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.682254 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 
'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.682266 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.682277 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682284 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682324 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682354 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682364 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.682384 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.682577 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682587 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.682593 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682613 | orchestrator | skipping: 
[testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.682657 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.682666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682678 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.682683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.682688 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682707 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': 
{'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682712 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.682718 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.682729 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.682735 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682740 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.682757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.682763 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.682774 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': 
{'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682779 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682785 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.682852 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.682859 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.682877 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682883 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682894 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682900 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.682905 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682910 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.682915 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.682958 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.682969 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.682997 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.683003 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': 
False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.683008 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.683013 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.683031 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.683041 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.683050 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.683055 | orchestrator | skipping: [testbed-node-3] 
2025-05-29 01:06:15.683060 | orchestrator | 2025-05-29 01:06:15.683065 | orchestrator | TASK [neutron : Copying over metadata_agent.ini] ******************************* 2025-05-29 01:06:15.683070 | orchestrator | Thursday 29 May 2025 01:03:20 +0000 (0:00:03.654) 0:02:04.430 ********** 2025-05-29 01:06:15.683075 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.683080 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.683085 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.683090 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.683095 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.683099 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.683104 | orchestrator | 2025-05-29 01:06:15.683109 | orchestrator | TASK [neutron : Copying over neutron_ovn_metadata_agent.ini] ******************* 2025-05-29 01:06:15.683114 | orchestrator | Thursday 29 May 2025 01:03:24 +0000 (0:00:04.170) 0:02:08.601 ********** 2025-05-29 01:06:15.683119 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.683123 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.683128 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.683133 | orchestrator | changed: [testbed-node-3] 2025-05-29 01:06:15.683138 | orchestrator | changed: [testbed-node-4] 2025-05-29 01:06:15.683142 | orchestrator | changed: [testbed-node-5] 2025-05-29 01:06:15.683147 | orchestrator | 2025-05-29 01:06:15.683152 | orchestrator | TASK [neutron : Copying over neutron_ovn_vpn_agent.ini] ************************ 2025-05-29 01:06:15.683156 | orchestrator | Thursday 29 May 2025 01:03:30 +0000 (0:00:05.786) 0:02:14.388 ********** 2025-05-29 01:06:15.683161 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.683166 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.683171 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.683175 | orchestrator | skipping: [testbed-node-3] 
2025-05-29 01:06:15.683180 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.683185 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.683284 | orchestrator | 2025-05-29 01:06:15.683289 | orchestrator | TASK [neutron : Copying over metering_agent.ini] ******************************* 2025-05-29 01:06:15.683294 | orchestrator | Thursday 29 May 2025 01:03:32 +0000 (0:00:02.606) 0:02:16.994 ********** 2025-05-29 01:06:15.683299 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.683309 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.683314 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.683318 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.683323 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.683328 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.683333 | orchestrator | 2025-05-29 01:06:15.683338 | orchestrator | TASK [neutron : Copying over ironic_neutron_agent.ini] ************************* 2025-05-29 01:06:15.683343 | orchestrator | Thursday 29 May 2025 01:03:36 +0000 (0:00:04.217) 0:02:21.211 ********** 2025-05-29 01:06:15.683348 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.683353 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.683358 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.683363 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.683368 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.683373 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.683378 | orchestrator | 2025-05-29 01:06:15.683382 | orchestrator | TASK [neutron : Copying over bgp_dragent.ini] ********************************** 2025-05-29 01:06:15.683388 | orchestrator | Thursday 29 May 2025 01:03:39 +0000 (0:00:02.168) 0:02:23.380 ********** 2025-05-29 01:06:15.683433 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.683441 | orchestrator | skipping: [testbed-node-1] 
2025-05-29 01:06:15.683445 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.683450 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.683455 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.683460 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.683465 | orchestrator | 2025-05-29 01:06:15.683469 | orchestrator | TASK [neutron : Copying over ovn_agent.ini] ************************************ 2025-05-29 01:06:15.683476 | orchestrator | Thursday 29 May 2025 01:03:41 +0000 (0:00:02.276) 0:02:25.656 ********** 2025-05-29 01:06:15.683484 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.683491 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.683499 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.683507 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.683514 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.683522 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.683530 | orchestrator | 2025-05-29 01:06:15.683536 | orchestrator | TASK [neutron : Copying over nsx.ini] ****************************************** 2025-05-29 01:06:15.683544 | orchestrator | Thursday 29 May 2025 01:03:44 +0000 (0:00:03.399) 0:02:29.056 ********** 2025-05-29 01:06:15.683552 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.683559 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.683567 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.683575 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.683583 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.683591 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.683599 | orchestrator | 2025-05-29 01:06:15.683607 | orchestrator | TASK [neutron : Copy neutron-l3-agent-wrapper script] ************************** 2025-05-29 01:06:15.683615 | orchestrator | Thursday 29 May 2025 01:03:50 +0000 (0:00:05.321) 0:02:34.378 ********** 
2025-05-29 01:06:15.683622 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.683629 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.683637 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.683645 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.683653 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.683661 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.683669 | orchestrator | 2025-05-29 01:06:15.683679 | orchestrator | TASK [neutron : Copying over extra ml2 plugins] ******************************** 2025-05-29 01:06:15.683684 | orchestrator | Thursday 29 May 2025 01:03:53 +0000 (0:00:03.699) 0:02:38.077 ********** 2025-05-29 01:06:15.683689 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.683694 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.683698 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.683703 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.683714 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.683719 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.683724 | orchestrator | 2025-05-29 01:06:15.683728 | orchestrator | TASK [neutron : Copying over neutron-tls-proxy.cfg] **************************** 2025-05-29 01:06:15.683733 | orchestrator | Thursday 29 May 2025 01:03:56 +0000 (0:00:02.503) 0:02:40.580 ********** 2025-05-29 01:06:15.683738 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-29 01:06:15.683744 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.683749 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-29 01:06:15.683754 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)  2025-05-29 01:06:15.683759 | orchestrator | skipping: [testbed-node-1] 2025-05-29 
01:06:15.683764 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:06:15.683769 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-29 01:06:15.683774 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:06:15.683779 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-29 01:06:15.683783 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:06:15.683788 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/neutron/templates/neutron-tls-proxy.cfg.j2)
2025-05-29 01:06:15.683793 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:06:15.683798 | orchestrator |
2025-05-29 01:06:15.683803 | orchestrator | TASK [neutron : Copying over neutron_taas.conf] ********************************
2025-05-29 01:06:15.683808 | orchestrator | Thursday 29 May 2025 01:03:59 +0000 (0:00:03.286) 0:02:43.867 **********
2025-05-29 01:06:15.683813 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})
2025-05-29 01:06:15.683837 | orchestrator | skipping: [testbed-node-0] =>
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.683843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.683855 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.683861 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.683866 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': 
True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.683883 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.683889 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.683902 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.683907 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.683912 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.683918 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.683934 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.683940 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.683948 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.683956 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.683961 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.683967 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', 
'/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.683972 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684006 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684016 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.684024 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.684030 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684035 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684040 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.684057 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.684067 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684076 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684082 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.684088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.684094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684100 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684117 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.684128 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.684139 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684145 | orchestrator | skipping: [testbed-node-1] => (item={'key': 
'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.684151 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684168 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': 
{'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684178 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684184 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.684193 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.684199 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684205 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684211 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684229 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 
'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684238 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.684246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': 
['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684252 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.684258 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684264 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684281 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.684294 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.684302 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684308 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.684314 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.684321 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684327 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684348 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684357 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.684363 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684369 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684375 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684402 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.684408 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684417 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.684423 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684428 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684434 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.684462 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.684468 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684473 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.684481 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.684486 | orchestrator | skipping: [testbed-node-3] => 
(item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684491 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684510 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684516 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.684523 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684529 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684534 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684539 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684548 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.684565 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684570 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.684576 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 
'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684581 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684586 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.684595 | orchestrator | 
skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.684610 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684616 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.684641 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.684647 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684652 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': 
'30'}}})  2025-05-29 01:06:15.684660 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684677 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.684683 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684691 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684696 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684701 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684709 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.684725 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684731 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.684736 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684744 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684749 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl 
-u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.684757 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.684773 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684779 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.684784 | orchestrator | 2025-05-29 01:06:15.684788 | orchestrator | TASK 
[neutron : Check neutron containers] ************************************** 2025-05-29 01:06:15.684794 | orchestrator | Thursday 29 May 2025 01:04:03 +0000 (0:00:03.868) 0:02:47.735 ********** 2025-05-29 01:06:15.684799 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.684806 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684815 | 
orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684820 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684836 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.684842 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684847 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684855 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.684863 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684868 | orchestrator | changed: [testbed-node-2] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-29 01:06:15.684884 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 
'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684890 | orchestrator | changed: [testbed-node-0] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-29 01:06:15.684898 | orchestrator | changed: [testbed-node-1] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}}) 2025-05-29 01:06:15.684907 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684912 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684919 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684924 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.684933 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 
'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685016 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685023 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 
'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.685029 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685038 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685043 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.685060 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685065 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.685070 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685078 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685083 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': 
['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685090 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685098 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.685103 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': 
True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685108 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685113 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685121 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 
'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.685126 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685136 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685141 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.685146 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685151 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685158 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 
'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.685163 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685174 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685179 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': 
True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.685183 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685188 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685195 | orchestrator | 
skipping: [testbed-node-2] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.685200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.685210 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 
5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685215 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685220 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.685225 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685230 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-metering-agent', 'value': 
{'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.685245 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': True, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685252 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.685257 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.685262 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': True, 'enabled': 
'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.685269 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685278 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': 
['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685286 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685291 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  
2025-05-29 01:06:15.685296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': False, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.685303 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.685311 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685319 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685324 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685329 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685333 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685338 | orchestrator | changed: [testbed-node-3] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-29 01:06:15.685345 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685353 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.685360 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685365 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 
'timeout': '30'}}})  2025-05-29 01:06:15.685370 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.13:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.685375 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.685385 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 
'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685390 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-server', 'value': {'container_name': 'neutron_server', 'image': 'registry.osism.tech/kolla/release/neutron-server:24.0.2.20241206', 'enabled': True, 'group': 'neutron-server', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:9696'], 'timeout': '30'}, 'haproxy': {'neutron_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696'}, 'neutron_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696'}}}})  2025-05-29 01:06:15.685397 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-openvswitch-agent', 'value': {'container_name': 'neutron_openvswitch_agent', 'image': 'registry.osism.tech/kolla/release/neutron-openvswitch-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-openvswitch-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 
'/run/openvswitch:/run/openvswitch:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-openvswitch-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685402 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-linuxbridge-agent', 'value': {'container_name': 'neutron_linuxbridge_agent', 'image': 'registry.osism.tech/kolla/release/neutron-linuxbridge-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-linuxbridge-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-linuxbridge-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685407 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-dhcp-agent', 'value': {'container_name': 'neutron_dhcp_agent', 'image': 'registry.osism.tech/kolla/release/neutron-dhcp-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-dhcp-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-dhcp-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-dhcp-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685417 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-l3-agent', 'value': {'container_name': 'neutron_l3_agent', 'image': 
'registry.osism.tech/kolla/release/neutron-l3-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'environment': {'KOLLA_LEGACY_IPTABLES': 'false'}, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-l3-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', "healthcheck_port 'neutron-l3-agent ' 5672"], 'timeout': '30'}}})  2025-05-29 01:06:15.685422 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-sriov-agent', 'value': {'container_name': 'neutron_sriov_agent', 'image': 'registry.osism.tech/kolla/release/neutron-sriov-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-sriov-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-sriov-nic-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685429 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-mlnx-agent', 'value': {'container_name': 'neutron_mlnx_agent', 'image': 'registry.osism.tech/kolla/release/neutron-mlnx-agent:24.0.2.20241206', 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-mlnx-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685434 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-eswitchd', 'value': {'container_name': 'neutron_eswitchd', 'image': 
'registry.osism.tech/kolla/release/neutron-eswitchd:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-eswitchd/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/run/libvirt:/run/libvirt:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685439 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metadata-agent', 'value': {'container_name': 'neutron_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-metadata-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685444 | orchestrator | changed: [testbed-node-5] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-29 01:06:15.685451 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685459 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.685464 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685471 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 
'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685476 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.15:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.685481 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.685489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685496 | orchestrator | changed: [testbed-node-4] => (item={'key': 'neutron-ovn-metadata-agent', 'value': {'container_name': 'neutron_ovn_metadata_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metadata-agent:24.0.2.20241206', 'privileged': True, 'enabled': True, 'host_in_groups': True, 'volumes': ['/etc/kolla/neutron-ovn-metadata-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 'neutron_metadata_socket:/var/lib/neutron/kolla/', '/run/openvswitch:/run/openvswitch:shared', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-metadata-agent 6640'], 'timeout': '30'}}}) 2025-05-29 01:06:15.685503 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-bgp-dragent', 'value': {'container_name': 'neutron_bgp_dragent', 'image': 'registry.osism.tech/kolla/release/neutron-bgp-dragent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 
'neutron-bgp-dragent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-bgp-dragent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-bgp-dragent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685508 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-infoblox-ipam-agent', 'value': {'container_name': 'neutron_infoblox_ipam_agent', 'image': 'registry.osism.tech/kolla/release/neutron-infoblox-ipam-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-infoblox-ipam-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-infoblox-ipam-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:06:15.685513 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-metering-agent', 'value': {'container_name': 'neutron_metering_agent', 'image': 'registry.osism.tech/kolla/release/neutron-metering-agent:24.0.2.20241206', 'privileged': True, 'enabled': False, 'group': 'neutron-metering-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-metering-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:06:15.685518 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'ironic-neutron-agent', 'value': {'container_name': 'ironic_neutron_agent', 'image': 'registry.osism.tech/kolla/release/ironic-neutron-agent:24.0.2.20241206', 'privileged': False, 'enabled': False, 'group': 'ironic-neutron-agent', 'host_in_groups': False, 'volumes': ['/etc/kolla/ironic-neutron-agent/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port ironic-neutron-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685525 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-tls-proxy', 'value': {'container_name': 'neutron_tls_proxy', 'group': 'neutron-server', 'host_in_groups': False, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/neutron-tls-proxy:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.14:9697'], 'timeout': '30'}, 'haproxy': {'neutron_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}, 'neutron_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9696', 'listen_port': '9696', 'tls_backend': 'yes'}}}})  2025-05-29 01:06:15.685533 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-agent', 'value': {'container_name': 'neutron_ovn_agent', 'group': 'neutron-ovn-agent', 'host_in_groups': True, 'enabled': False, 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-agent:24.0.2.20241206', 'volumes': ['/etc/kolla/neutron-ovn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port neutron-ovn-agent 6640'], 'timeout': '30'}}})  2025-05-29 01:06:15.685540 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'neutron-ovn-vpn-agent', 'value': {'container_name': 'neutron_ovn_vpn_agent', 'image': 'registry.osism.tech/dockerhub/kolla/release/neutron-ovn-vpn-agent:24.0.2.20241206', 'enabled': False, 'privileged': True, 'host_in_groups': False, 'volumes': ['/etc/kolla/neutron-ovn-vpn-agent/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/netns:/run/netns:shared', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port python 6642', '&&', 'healthcheck_port neutron-ovn-vpn-agent 5672'], 'timeout': '30'}}})  2025-05-29 01:06:15.685545 | orchestrator | 2025-05-29 01:06:15.685550 | orchestrator | TASK [neutron : include_tasks] ************************************************* 2025-05-29 01:06:15.685554 | orchestrator | Thursday 29 May 2025 01:04:09 +0000 (0:00:05.691) 0:02:53.427 ********** 2025-05-29 01:06:15.685559 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:06:15.685564 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:06:15.685568 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:06:15.685573 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:06:15.685577 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:06:15.685582 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:06:15.685587 | orchestrator | 2025-05-29 01:06:15.685591 | orchestrator | TASK [neutron : Creating Neutron database] ************************************* 2025-05-29 01:06:15.685596 | orchestrator | Thursday 29 May 2025 01:04:10 +0000 (0:00:01.388) 0:02:54.815 ********** 2025-05-29 01:06:15.685600 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:06:15.685605 | orchestrator | 2025-05-29 01:06:15.685609 | orchestrator | TASK [neutron : Creating Neutron database user and setting permissions] ******** 2025-05-29 01:06:15.685614 | 
orchestrator | Thursday 29 May 2025 01:04:13 +0000 (0:00:03.039) 0:02:57.855 ********** 2025-05-29 01:06:15.685618 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:06:15.685623 | orchestrator | 2025-05-29 01:06:15.685628 | orchestrator | TASK [neutron : Running Neutron bootstrap container] *************************** 2025-05-29 01:06:15.685632 | orchestrator | Thursday 29 May 2025 01:04:15 +0000 (0:00:02.421) 0:03:00.277 ********** 2025-05-29 01:06:15.685637 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:06:15.685641 | orchestrator | 2025-05-29 01:06:15.685646 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-29 01:06:15.685655 | orchestrator | Thursday 29 May 2025 01:05:02 +0000 (0:00:46.508) 0:03:46.785 ********** 2025-05-29 01:06:15.685660 | orchestrator | 2025-05-29 01:06:15.685664 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-29 01:06:15.685669 | orchestrator | Thursday 29 May 2025 01:05:02 +0000 (0:00:00.112) 0:03:46.898 ********** 2025-05-29 01:06:15.685673 | orchestrator | 2025-05-29 01:06:15.685678 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-29 01:06:15.685683 | orchestrator | Thursday 29 May 2025 01:05:02 +0000 (0:00:00.387) 0:03:47.285 ********** 2025-05-29 01:06:15.685687 | orchestrator | 2025-05-29 01:06:15.685692 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-29 01:06:15.685696 | orchestrator | Thursday 29 May 2025 01:05:03 +0000 (0:00:00.107) 0:03:47.392 ********** 2025-05-29 01:06:15.685701 | orchestrator | 2025-05-29 01:06:15.685705 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-29 01:06:15.685710 | orchestrator | Thursday 29 May 2025 01:05:03 +0000 (0:00:00.073) 0:03:47.466 ********** 2025-05-29 01:06:15.685714 | orchestrator | 2025-05-29 
01:06:15.685719 | orchestrator | TASK [neutron : Flush Handlers] ************************************************ 2025-05-29 01:06:15.685723 | orchestrator | Thursday 29 May 2025 01:05:03 +0000 (0:00:00.084) 0:03:47.551 ********** 2025-05-29 01:06:15.685728 | orchestrator | 2025-05-29 01:06:15.685732 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-server container] ******************* 2025-05-29 01:06:15.685737 | orchestrator | Thursday 29 May 2025 01:05:03 +0000 (0:00:00.231) 0:03:47.782 ********** 2025-05-29 01:06:15.685741 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:06:15.685746 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:06:15.685751 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:06:15.685755 | orchestrator | 2025-05-29 01:06:15.685760 | orchestrator | RUNNING HANDLER [neutron : Restart neutron-ovn-metadata-agent container] ******* 2025-05-29 01:06:15.685764 | orchestrator | Thursday 29 May 2025 01:05:24 +0000 (0:00:20.791) 0:04:08.573 ********** 2025-05-29 01:06:15.685769 | orchestrator | changed: [testbed-node-4] 2025-05-29 01:06:15.685773 | orchestrator | changed: [testbed-node-3] 2025-05-29 01:06:15.685778 | orchestrator | changed: [testbed-node-5] 2025-05-29 01:06:15.685782 | orchestrator | 2025-05-29 01:06:15.685787 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 01:06:15.685794 | orchestrator | testbed-node-0 : ok=27  changed=16  unreachable=0 failed=0 skipped=32  rescued=0 ignored=0 2025-05-29 01:06:15.685799 | orchestrator | testbed-node-1 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-29 01:06:15.685804 | orchestrator | testbed-node-2 : ok=17  changed=9  unreachable=0 failed=0 skipped=31  rescued=0 ignored=0 2025-05-29 01:06:15.685809 | orchestrator | testbed-node-3 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-29 01:06:15.685813 | orchestrator | testbed-node-4 : ok=15  
changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-29 01:06:15.685818 | orchestrator | testbed-node-5 : ok=15  changed=7  unreachable=0 failed=0 skipped=33  rescued=0 ignored=0 2025-05-29 01:06:15.685822 | orchestrator | 2025-05-29 01:06:15.685827 | orchestrator | 2025-05-29 01:06:15.685832 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 01:06:15.685836 | orchestrator | Thursday 29 May 2025 01:06:12 +0000 (0:00:48.578) 0:04:57.152 ********** 2025-05-29 01:06:15.685841 | orchestrator | =============================================================================== 2025-05-29 01:06:15.685845 | orchestrator | neutron : Restart neutron-ovn-metadata-agent container ----------------- 48.58s 2025-05-29 01:06:15.685852 | orchestrator | neutron : Running Neutron bootstrap container -------------------------- 46.51s 2025-05-29 01:06:15.685857 | orchestrator | neutron : Restart neutron-server container ----------------------------- 20.79s 2025-05-29 01:06:15.685865 | orchestrator | service-ks-register : neutron | Granting user roles --------------------- 8.35s 2025-05-29 01:06:15.685869 | orchestrator | neutron : Copying over neutron.conf ------------------------------------- 7.39s 2025-05-29 01:06:15.685874 | orchestrator | service-ks-register : neutron | Creating endpoints ---------------------- 6.42s 2025-05-29 01:06:15.685878 | orchestrator | neutron : Copying over neutron_ovn_metadata_agent.ini ------------------- 5.79s 2025-05-29 01:06:15.685883 | orchestrator | neutron : Check neutron containers -------------------------------------- 5.69s 2025-05-29 01:06:15.685887 | orchestrator | neutron : Copying over config.json files for services ------------------- 5.56s 2025-05-29 01:06:15.685892 | orchestrator | neutron : Creating TLS backend PEM File --------------------------------- 5.48s 2025-05-29 01:06:15.685896 | orchestrator | service-cert-copy : neutron | Copying over extra CA 
certificates -------- 5.37s 2025-05-29 01:06:15.685901 | orchestrator | neutron : Copying over nsx.ini ------------------------------------------ 5.32s 2025-05-29 01:06:15.685905 | orchestrator | neutron : Copying over ssh key ------------------------------------------ 4.81s 2025-05-29 01:06:15.685910 | orchestrator | neutron : Copying over neutron_vpnaas.conf ------------------------------ 4.78s 2025-05-29 01:06:15.685914 | orchestrator | service-cert-copy : neutron | Copying over backend internal TLS key ----- 4.45s 2025-05-29 01:06:15.685919 | orchestrator | neutron : Copying over metering_agent.ini ------------------------------- 4.22s 2025-05-29 01:06:15.685924 | orchestrator | neutron : Copying over metadata_agent.ini ------------------------------- 4.17s 2025-05-29 01:06:15.685928 | orchestrator | Load and persist kernel modules ----------------------------------------- 4.09s 2025-05-29 01:06:15.685933 | orchestrator | neutron : Copying over existing policy file ----------------------------- 4.00s 2025-05-29 01:06:15.685937 | orchestrator | neutron : Copying over mlnx_agent.ini ----------------------------------- 3.93s 2025-05-29 01:06:18.694944 | orchestrator | 2025-05-29 01:06:18 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:06:18.696649 | orchestrator | 2025-05-29 01:06:18 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:06:18.697599 | orchestrator | 2025-05-29 01:06:18 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:06:18.697633 | orchestrator | 2025-05-29 01:06:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:06:18.697646 | orchestrator | 2025-05-29 01:06:18 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:06:18.697658 | orchestrator | 2025-05-29 01:06:18 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:06:21.726345 | orchestrator | 2025-05-29 01:06:21 
| INFO  | Task 
b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:07.261022 | orchestrator | 2025-05-29 01:07:07 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:07.262280 | orchestrator | 2025-05-29 01:07:07 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:07.263531 | orchestrator | 2025-05-29 01:07:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:07.264434 | orchestrator | 2025-05-29 01:07:07 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:07.264463 | orchestrator | 2025-05-29 01:07:07 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:10.331026 | orchestrator | 2025-05-29 01:07:10 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:10.331368 | orchestrator | 2025-05-29 01:07:10 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:10.331813 | orchestrator | 2025-05-29 01:07:10 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:10.332413 | orchestrator | 2025-05-29 01:07:10 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:10.334272 | orchestrator | 2025-05-29 01:07:10 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:10.334456 | orchestrator | 2025-05-29 01:07:10 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:13.371290 | orchestrator | 2025-05-29 01:07:13 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:13.372607 | orchestrator | 2025-05-29 01:07:13 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:13.377036 | orchestrator | 2025-05-29 01:07:13 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:13.378889 | orchestrator | 2025-05-29 01:07:13 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:13.380599 | orchestrator | 2025-05-29 01:07:13 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:13.381090 | orchestrator | 2025-05-29 01:07:13 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:16.424547 | orchestrator | 2025-05-29 01:07:16 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:16.425734 | orchestrator | 2025-05-29 01:07:16 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:16.426341 | orchestrator | 2025-05-29 01:07:16 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:16.427542 | orchestrator | 2025-05-29 01:07:16 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:16.429787 | orchestrator | 2025-05-29 01:07:16 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:16.429815 | orchestrator | 2025-05-29 01:07:16 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:19.459325 | orchestrator | 2025-05-29 01:07:19 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:19.459994 | orchestrator | 2025-05-29 01:07:19 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:19.460028 | orchestrator | 2025-05-29 01:07:19 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:19.460622 | orchestrator | 2025-05-29 01:07:19 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:19.461785 | orchestrator | 2025-05-29 01:07:19 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:19.461888 | orchestrator | 2025-05-29 01:07:19 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:22.500701 | orchestrator | 2025-05-29 01:07:22 | INFO  | Task 
b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:22.502869 | orchestrator | 2025-05-29 01:07:22 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:22.505078 | orchestrator | 2025-05-29 01:07:22 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:22.506148 | orchestrator | 2025-05-29 01:07:22 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:22.507089 | orchestrator | 2025-05-29 01:07:22 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:22.507114 | orchestrator | 2025-05-29 01:07:22 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:25.556978 | orchestrator | 2025-05-29 01:07:25 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:25.561112 | orchestrator | 2025-05-29 01:07:25 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:25.561152 | orchestrator | 2025-05-29 01:07:25 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:25.561166 | orchestrator | 2025-05-29 01:07:25 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:25.561178 | orchestrator | 2025-05-29 01:07:25 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:25.561191 | orchestrator | 2025-05-29 01:07:25 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:28.610414 | orchestrator | 2025-05-29 01:07:28 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:28.610524 | orchestrator | 2025-05-29 01:07:28 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:28.610785 | orchestrator | 2025-05-29 01:07:28 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:28.611172 | orchestrator | 2025-05-29 01:07:28 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:28.611800 | orchestrator | 2025-05-29 01:07:28 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:28.611823 | orchestrator | 2025-05-29 01:07:28 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:31.640022 | orchestrator | 2025-05-29 01:07:31 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:31.640236 | orchestrator | 2025-05-29 01:07:31 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:31.640731 | orchestrator | 2025-05-29 01:07:31 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:31.641200 | orchestrator | 2025-05-29 01:07:31 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:31.641742 | orchestrator | 2025-05-29 01:07:31 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:31.641772 | orchestrator | 2025-05-29 01:07:31 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:34.687088 | orchestrator | 2025-05-29 01:07:34 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:34.692894 | orchestrator | 2025-05-29 01:07:34 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:34.694213 | orchestrator | 2025-05-29 01:07:34 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:34.695672 | orchestrator | 2025-05-29 01:07:34 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:34.698174 | orchestrator | 2025-05-29 01:07:34 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:34.698361 | orchestrator | 2025-05-29 01:07:34 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:37.740655 | orchestrator | 2025-05-29 01:07:37 | INFO  | Task 
b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:37.740761 | orchestrator | 2025-05-29 01:07:37 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:37.742240 | orchestrator | 2025-05-29 01:07:37 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:37.746408 | orchestrator | 2025-05-29 01:07:37 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:37.749418 | orchestrator | 2025-05-29 01:07:37 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:37.749871 | orchestrator | 2025-05-29 01:07:37 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:40.810127 | orchestrator | 2025-05-29 01:07:40 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:40.811451 | orchestrator | 2025-05-29 01:07:40 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:40.813068 | orchestrator | 2025-05-29 01:07:40 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:40.819215 | orchestrator | 2025-05-29 01:07:40 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:40.822791 | orchestrator | 2025-05-29 01:07:40 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:40.822816 | orchestrator | 2025-05-29 01:07:40 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:43.882237 | orchestrator | 2025-05-29 01:07:43 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:43.883849 | orchestrator | 2025-05-29 01:07:43 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:43.885784 | orchestrator | 2025-05-29 01:07:43 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:43.889004 | orchestrator | 2025-05-29 01:07:43 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:43.891999 | orchestrator | 2025-05-29 01:07:43 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:43.892116 | orchestrator | 2025-05-29 01:07:43 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:46.941692 | orchestrator | 2025-05-29 01:07:46 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:46.943406 | orchestrator | 2025-05-29 01:07:46 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:46.944671 | orchestrator | 2025-05-29 01:07:46 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:46.947378 | orchestrator | 2025-05-29 01:07:46 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:46.949649 | orchestrator | 2025-05-29 01:07:46 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:46.949673 | orchestrator | 2025-05-29 01:07:46 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:49.994562 | orchestrator | 2025-05-29 01:07:49 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:49.995407 | orchestrator | 2025-05-29 01:07:49 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:49.996019 | orchestrator | 2025-05-29 01:07:49 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:49.997261 | orchestrator | 2025-05-29 01:07:49 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:50.005789 | orchestrator | 2025-05-29 01:07:50 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:50.005870 | orchestrator | 2025-05-29 01:07:50 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:53.049626 | orchestrator | 2025-05-29 01:07:53 | INFO  | Task 
b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:53.050225 | orchestrator | 2025-05-29 01:07:53 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:53.051296 | orchestrator | 2025-05-29 01:07:53 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:53.052060 | orchestrator | 2025-05-29 01:07:53 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:53.053062 | orchestrator | 2025-05-29 01:07:53 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:53.053087 | orchestrator | 2025-05-29 01:07:53 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:56.091119 | orchestrator | 2025-05-29 01:07:56 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:56.092414 | orchestrator | 2025-05-29 01:07:56 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:56.095221 | orchestrator | 2025-05-29 01:07:56 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:56.097602 | orchestrator | 2025-05-29 01:07:56 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:56.099925 | orchestrator | 2025-05-29 01:07:56 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:56.099962 | orchestrator | 2025-05-29 01:07:56 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:07:59.151620 | orchestrator | 2025-05-29 01:07:59 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:07:59.153842 | orchestrator | 2025-05-29 01:07:59 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:07:59.155660 | orchestrator | 2025-05-29 01:07:59 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:07:59.158112 | orchestrator | 2025-05-29 01:07:59 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:07:59.160009 | orchestrator | 2025-05-29 01:07:59 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:07:59.160079 | orchestrator | 2025-05-29 01:07:59 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:08:02.211504 | orchestrator | 2025-05-29 01:08:02 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:08:02.212044 | orchestrator | 2025-05-29 01:08:02 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:08:02.212473 | orchestrator | 2025-05-29 01:08:02 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:08:02.213266 | orchestrator | 2025-05-29 01:08:02 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:08:02.215905 | orchestrator | 2025-05-29 01:08:02 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state STARTED 2025-05-29 01:08:02.215944 | orchestrator | 2025-05-29 01:08:02 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:08:05.265775 | orchestrator | 2025-05-29 01:08:05 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:08:05.267866 | orchestrator | 2025-05-29 01:08:05 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:08:05.269732 | orchestrator | 2025-05-29 01:08:05 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:08:05.273953 | orchestrator | 2025-05-29 01:08:05 | INFO  | Task 805e288f-1c93-45ff-b9d4-966879aaf853 is in state STARTED 2025-05-29 01:08:05.276234 | orchestrator | 2025-05-29 01:08:05 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:08:05.279476 | orchestrator | 2025-05-29 01:08:05 | INFO  | Task 222c4d4a-bc6d-472e-867f-3f5b90e5bb15 is in state SUCCESS 2025-05-29 01:08:05.281097 | orchestrator | 2025-05-29 01:08:05.281138 | orchestrator 
| 2025-05-29 01:08:05.281152 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-29 01:08:05.281166 | orchestrator | 2025-05-29 01:08:05.281178 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-29 01:08:05.281190 | orchestrator | Thursday 29 May 2025 01:03:37 +0000 (0:00:00.486) 0:00:00.486 ********** 2025-05-29 01:08:05.281202 | orchestrator | ok: [testbed-manager] 2025-05-29 01:08:05.281215 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:08:05.281228 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:08:05.281240 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:08:05.281251 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:08:05.281263 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:08:05.281274 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:08:05.281286 | orchestrator | 2025-05-29 01:08:05.281298 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-29 01:08:05.281310 | orchestrator | Thursday 29 May 2025 01:03:38 +0000 (0:00:00.927) 0:00:01.413 ********** 2025-05-29 01:08:05.281323 | orchestrator | ok: [testbed-manager] => (item=enable_prometheus_True) 2025-05-29 01:08:05.281335 | orchestrator | ok: [testbed-node-0] => (item=enable_prometheus_True) 2025-05-29 01:08:05.281347 | orchestrator | ok: [testbed-node-1] => (item=enable_prometheus_True) 2025-05-29 01:08:05.281402 | orchestrator | ok: [testbed-node-2] => (item=enable_prometheus_True) 2025-05-29 01:08:05.281415 | orchestrator | ok: [testbed-node-3] => (item=enable_prometheus_True) 2025-05-29 01:08:05.281426 | orchestrator | ok: [testbed-node-4] => (item=enable_prometheus_True) 2025-05-29 01:08:05.281437 | orchestrator | ok: [testbed-node-5] => (item=enable_prometheus_True) 2025-05-29 01:08:05.281448 | orchestrator | 2025-05-29 01:08:05.281459 | orchestrator | PLAY [Apply role prometheus] 
*************************************************** 2025-05-29 01:08:05.281471 | orchestrator | 2025-05-29 01:08:05.281531 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-29 01:08:05.281557 | orchestrator | Thursday 29 May 2025 01:03:39 +0000 (0:00:01.034) 0:00:02.448 ********** 2025-05-29 01:08:05.281570 | orchestrator | included: /ansible/roles/prometheus/tasks/deploy.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 01:08:05.281584 | orchestrator | 2025-05-29 01:08:05.281596 | orchestrator | TASK [prometheus : Ensuring config directories exist] ************************** 2025-05-29 01:08:05.281619 | orchestrator | Thursday 29 May 2025 01:03:41 +0000 (0:00:01.704) 0:00:04.152 ********** 2025-05-29 01:08:05.281650 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.281698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.281742 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.281771 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.281787 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.281803 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-29 01:08:05.281823 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 
'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.281844 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.281867 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.281881 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': 
{'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.281896 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.281910 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.281923 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.281950 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.281965 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.281979 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.282000 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282062 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282079 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.282093 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282104 | orchestrator | changed: [testbed-node-4] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.282132 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282155 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.282180 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': 
['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282221 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282241 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.282260 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.282282 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 
'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.282314 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-29 01:08:05.282351 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 
'dimensions': {}}}) 2025-05-29 01:08:05.282410 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 01:08:05.282433 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-29 01:08:05.282453 
| orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282489 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 01:08:05.282509 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}}})  2025-05-29 01:08:05.282529 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.282558 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.282580 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282601 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-29 01:08:05.282647 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-29 01:08:05.282671 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': 
{'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 01:08:05.282693 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282723 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': 
{'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 01:08:05.282738 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282757 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282775 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282786 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.282798 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.282809 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.282828 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-29 01:08:05.282841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 01:08:05.282867 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-29 01:08:05.282880 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 01:08:05.282898 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.282910 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.282921 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282940 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-29 
01:08:05.282951 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.282968 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.282980 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 
'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-29 01:08:05.283000 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 01:08:05.283012 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.283029 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.283041 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.283058 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.283069 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 
01:08:05.283186 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.283207 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.283219 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.283239 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.283253 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.283272 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.283286 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.283299 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.283314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.283336 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.283383 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.283398 | orchestrator | 2025-05-29 01:08:05.283411 | orchestrator | TASK [prometheus : include_tasks] ********************************************** 2025-05-29 01:08:05.283424 | orchestrator | Thursday 29 May 2025 01:03:46 +0000 (0:00:04.888) 0:00:09.040 ********** 2025-05-29 01:08:05.283439 | orchestrator | included: /ansible/roles/prometheus/tasks/copy-certs.yml for testbed-manager, testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 01:08:05.283454 | orchestrator | 2025-05-29 01:08:05.283467 | orchestrator | TASK [service-cert-copy : prometheus | Copying over extra CA certificates] ***** 2025-05-29 01:08:05.283480 | orchestrator | Thursday 29 May 2025 01:03:50 +0000 (0:00:03.503) 0:00:12.544 ********** 2025-05-29 01:08:05.283494 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-29 01:08:05.283514 | 
orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.283529 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.283542 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.283562 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': 
['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.283584 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.283597 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.283612 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.283628 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 
'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.283640 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.283652 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.283664 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.283692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.283705 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.283717 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.283728 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.283744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.283756 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.283768 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.283779 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.283804 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}}) 2025-05-29 01:08:05.284148 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': 
['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.284170 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.284189 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.284201 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.284213 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.284233 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.284245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.284262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.284274 | orchestrator | 2025-05-29 01:08:05.284286 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS certificate] *** 2025-05-29 01:08:05.284297 | orchestrator | Thursday 29 May 2025 01:03:56 +0000 (0:00:06.422) 0:00:18.966 ********** 2025-05-29 01:08:05.284309 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.284325 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.284337 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.284349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.284429 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.284443 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:05.284456 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.284475 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.284488 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.284507 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-29 01:08:05.284577 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.284595 | orchestrator | skipping: [testbed-manager] 2025-05-29 01:08:05.284607 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.284619 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 
'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.284630 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.284648 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.284660 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.284671 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:05.284688 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.284700 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.284718 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.284729 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.284744 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.284757 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:05.284778 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.284822 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.284835 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.284852 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.284872 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.284885 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.284898 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:08:05.284909 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:08:05.284920 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.284930 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.284946 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 
'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.284956 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:08:05.284966 | orchestrator |
2025-05-29 01:08:05.284976 | orchestrator | TASK [service-cert-copy : prometheus | Copying over backend internal TLS key] ***
2025-05-29 01:08:05.284986 | orchestrator | Thursday 29 May 2025 01:03:59 +0000 (0:00:03.032) 0:00:21.999 **********
2025-05-29 01:08:05.285001 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-29 01:08:05.285018 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.285028 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.285039 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-29 01:08:05.285050 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.285060 | orchestrator | skipping: [testbed-manager] 2025-05-29 01:08:05.285076 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.285086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.285110 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': 
{}}})  2025-05-29 01:08:05.285121 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.285131 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.285142 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:05.285152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.285162 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 
'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.285177 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.285187 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.285202 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.285212 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:05.285227 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.285237 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.285247 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.285258 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 
'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.285268 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.285278 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:05.285670 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.285689 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.285711 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.285722 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:08:05.285733 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.285743 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', 
'/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.285754 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.285764 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:08:05.285774 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})  2025-05-29 01:08:05.285789 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.285805 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 
'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.285816 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:05.285825 | orchestrator |
2025-05-29 01:08:05.285835 | orchestrator | TASK [prometheus : Copying over config.json files] *****************************
2025-05-29 01:08:05.285846 | orchestrator | Thursday 29 May 2025 01:04:04 +0000 (0:00:04.732) 0:00:26.732 **********
2025-05-29 01:08:05.285860 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-29 01:08:05.285871 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro',
'/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.285882 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.285892 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 
'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.285914 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.285929 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-29 01:08:05.285940 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 
'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-29 01:08:05.285950 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-29 01:08:05.285961 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})
2025-05-29 01:08:05.285971 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-29 01:08:05.285992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286004 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286047 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-29 01:08:05.286060 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-29 01:08:05.286070 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286080 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286090 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286129 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286169 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-29 01:08:05.286181 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286237 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286249 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}})
2025-05-29 01:08:05.286260 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286271 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286341 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.286425 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.286448 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-29 01:08:05.286471 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-29 01:08:05.286484 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286497 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286510 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286529 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.286548 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-29 01:08:05.286566 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286579 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-29 01:08:05.286592 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286622 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286640 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.286651 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-29 01:08:05.286666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-29 01:08:05.286677 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286687 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286709 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-29 01:08:05.286720 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-29 01:08:05.286735 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286745 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.286755 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286766 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286781 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.286797 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-29 01:08:05.286812 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-29 01:08:05.286823 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.286834 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-29 01:08:05.286850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-29 01:08:05.286865 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.286876 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286890 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.286901 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286911 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286926 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.286936 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286945 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.286958 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-29 01:08:05.286971 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-29 01:08:05.286980 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.286993 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.287001 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.287009 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.287022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro',
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.287031 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.287059 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.287068 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.287084 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 
'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.287092 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.287101 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})  2025-05-29 01:08:05.287122 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.287132 | orchestrator | 2025-05-29 01:08:05.287156 | orchestrator | TASK [prometheus : Find custom prometheus alert rules files] ******************* 2025-05-29 01:08:05.287165 | orchestrator | Thursday 29 May 2025 01:04:13 +0000 (0:00:09.376) 0:00:36.108 ********** 2025-05-29 01:08:05.287174 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-29 01:08:05.287225 | orchestrator | 2025-05-29 01:08:05.287234 | orchestrator | TASK [prometheus : Copying over custom prometheus alert rules files] *********** 2025-05-29 01:08:05.287242 | orchestrator | Thursday 29 May 2025 01:04:14 +0000 (0:00:00.451) 0:00:36.559 ********** 2025-05-29 01:08:05.287251 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326091, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8053973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287267 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326091, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8053973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287282 | 
orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326091, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8053973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287291 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326091, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8053973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287299 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326096, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287308 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': 
False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326096, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287323 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326091, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8053973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287332 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326091, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8053973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287345 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/cadvisor.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3682, 'inode': 1326091, 'dev': 209, 'nlink': 1, 'atime': 
1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8053973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-29 01:08:05.287375 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326092, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8063974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287385 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326096, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287393 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326096, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 
'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287401 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326092, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8063974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287415 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326096, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287424 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326096, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287436 | orchestrator | skipping: 
[testbed-node-1] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1326095, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287450 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326092, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8063974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287459 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326092, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8063974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287467 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': 
False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1326095, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287475 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326092, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8063974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287839 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326092, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8063974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287855 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326119, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 
'ctime': 1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287869 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1326095, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287885 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1326095, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287893 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1326095, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': 
False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287902 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326119, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287910 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/hardware.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19651, 'inode': 1326096, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}) 2025-05-29 01:08:05.287924 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1326095, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287932 | orchestrator | skipping: [testbed-node-1] => (item={'path': 
'/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326099, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287950 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326119, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287959 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326119, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287967 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326099, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287976 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326119, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287984 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1326094, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})  2025-05-29 01:08:05.287997 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326119, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 
1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288006 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326099, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288022 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326099, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288032 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1326094, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288040 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326099, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288048 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326099, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288057 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1326094, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288069 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326098, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8093975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288077 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1326094, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288094 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326098, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8093975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288103 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1326094, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288111 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1326094, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288120 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/ceph.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 11895, 'inode': 1326092, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8063974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288128 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326098, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8093975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288140 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326118, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288156 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326098, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8093975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288169 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326098, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8093975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288178 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326118, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288186 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326098, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8093975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288194 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326118, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288202 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326118, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288215 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326093, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288228 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326118, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288240 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326093, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288248 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326118, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288257 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326093, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288265 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/haproxy.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 7933, 'inode': 1326095, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8083973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288274 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326093, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288287 | orchestrator | skipping: [testbed-node-1] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326103, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288301 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:05.288310 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326093, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288384 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326093, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288394 | orchestrator | skipping: [testbed-node-0] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326103, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288403 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:05.288411 | orchestrator | skipping: [testbed-node-3] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326103, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288419 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:08:05.288428 | orchestrator | skipping: [testbed-node-2] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326103, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288436 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:05.288444 | orchestrator | skipping: [testbed-node-4] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326103, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288458 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:08:05.288475 | orchestrator | skipping: [testbed-node-5] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326103, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288485 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:05.288496 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/redfish.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 334, 'inode': 1326119, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288510 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/openstack.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12293, 'inode': 1326099, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288521 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/fluentd-aggregator.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 996, 'inode': 1326094, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288531 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/mysql.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3792, 'inode': 1326098, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8093975, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288541 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/rabbitmq.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 3539, 'inode': 1326118, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8133974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288551 | orchestrator | changed: [testbed-manager] => (item={'path': '/operations/prometheus/elasticsearch.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 5987, 'inode': 1326093, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8073974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288570 | orchestrator | changed:
[testbed-manager] => (item={'path': '/operations/prometheus/prometheus.rules', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12018, 'inode': 1326103, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.8103974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False})
2025-05-29 01:08:05.288580 | orchestrator |
2025-05-29 01:08:05.288590 | orchestrator | TASK [prometheus : Find prometheus common config overrides] ********************
2025-05-29 01:08:05.288601 | orchestrator | Thursday 29 May 2025 01:04:50 +0000 (0:00:36.180) 0:01:12.740 **********
2025-05-29 01:08:05.288610 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-29 01:08:05.288620 | orchestrator |
2025-05-29 01:08:05.288630 | orchestrator | TASK [prometheus : Find prometheus host config overrides] **********************
2025-05-29 01:08:05.288639 | orchestrator | Thursday 29 May 2025 01:04:50 +0000 (0:00:00.409) 0:01:13.149 **********
2025-05-29 01:08:05.288649 | orchestrator | [WARNING]: Skipped
2025-05-29 01:08:05.288659 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.288669 | orchestrator | manager/prometheus.yml.d' path due to this access issue:
2025-05-29 01:08:05.288679 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.288689 | orchestrator | manager/prometheus.yml.d' is not a directory
2025-05-29 01:08:05.288699 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-29 01:08:05.288958 | orchestrator | [WARNING]: Skipped
2025-05-29 01:08:05.288970 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.288983 | orchestrator | node-0/prometheus.yml.d' path due to this access issue:
2025-05-29 01:08:05.288992 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.289000 | orchestrator | node-0/prometheus.yml.d' is not a directory
2025-05-29 01:08:05.289008 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-29 01:08:05.289016 | orchestrator | [WARNING]: Skipped
2025-05-29 01:08:05.289024 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.289032 | orchestrator | node-1/prometheus.yml.d' path due to this access issue:
2025-05-29 01:08:05.289040 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.289048 | orchestrator | node-1/prometheus.yml.d' is not a directory
2025-05-29 01:08:05.289056 | orchestrator | [WARNING]: Skipped
2025-05-29 01:08:05.289064 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.289073 | orchestrator | node-2/prometheus.yml.d' path due to this access issue:
2025-05-29 01:08:05.289080 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.289088 | orchestrator | node-2/prometheus.yml.d' is not a directory
2025-05-29 01:08:05.289096 | orchestrator | [WARNING]: Skipped
2025-05-29 01:08:05.289104 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.289112 | orchestrator | node-3/prometheus.yml.d' path due to this access issue:
2025-05-29 01:08:05.289120 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.289135 | orchestrator | node-3/prometheus.yml.d' is not a directory
2025-05-29 01:08:05.289144 | orchestrator | [WARNING]: Skipped
2025-05-29 01:08:05.289152 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.289160 | orchestrator | node-4/prometheus.yml.d' path due to this access issue:
2025-05-29 01:08:05.289168 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.289176 | orchestrator | node-4/prometheus.yml.d' is not a directory
2025-05-29 01:08:05.289184 | orchestrator | [WARNING]: Skipped
2025-05-29 01:08:05.289192 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.289200 | orchestrator | node-5/prometheus.yml.d' path due to this access issue:
2025-05-29 01:08:05.289208 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/testbed-
2025-05-29 01:08:05.289216 | orchestrator | node-5/prometheus.yml.d' is not a directory
2025-05-29 01:08:05.289225 | orchestrator | ok: [testbed-node-1 -> localhost]
2025-05-29 01:08:05.289233 | orchestrator | ok: [testbed-node-2 -> localhost]
2025-05-29 01:08:05.289241 | orchestrator | ok: [testbed-node-3 -> localhost]
2025-05-29 01:08:05.289249 | orchestrator | ok: [testbed-node-4 -> localhost]
2025-05-29 01:08:05.289257 | orchestrator | ok: [testbed-node-5 -> localhost]
2025-05-29 01:08:05.289265 | orchestrator |
2025-05-29 01:08:05.289273 | orchestrator | TASK [prometheus : Copying over prometheus config file] ************************
2025-05-29 01:08:05.289282 | orchestrator | Thursday 29 May 2025 01:04:52 +0000 (0:00:01.528) 0:01:14.678 **********
2025-05-29 01:08:05.289290 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-29 01:08:05.289298 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:05.289307 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-29 01:08:05.289314 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:08:05.289323 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-29 01:08:05.289331 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:05.289339 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-29 01:08:05.289347 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:05.289355 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-29 01:08:05.289468 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:05.289485 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-29 01:08:05.289500 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:08:05.289513 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus.yml.j2)
2025-05-29 01:08:05.289522 | orchestrator |
2025-05-29 01:08:05.289530 | orchestrator | TASK [prometheus : Copying over prometheus web config file] ********************
2025-05-29 01:08:05.289538 | orchestrator | Thursday 29 May 2025 01:05:08 +0000 (0:00:16.774) 0:01:31.453 **********
2025-05-29 01:08:05.289546 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-29 01:08:05.289554 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:05.289563 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-29 01:08:05.289571 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:05.289581 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-29 01:08:05.289590 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:08:05.289600 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-29 01:08:05.289609 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:05.289619 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-29 01:08:05.289636 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:08:05.289651 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-29 01:08:05.289659 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:05.289668 | orchestrator | changed: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-web.yml.j2)
2025-05-29 01:08:05.289676 | orchestrator |
2025-05-29 01:08:05.289684 | orchestrator | TASK [prometheus : Copying over prometheus alertmanager config file] ***********
2025-05-29 01:08:05.289692 | orchestrator | Thursday 29 May 2025 01:05:13 +0000 (0:00:04.224) 0:01:35.678 **********
2025-05-29 01:08:05.289700 | orchestrator | skipping: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-29 01:08:05.289709 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:05.289718 | orchestrator | skipping: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-29 01:08:05.289726 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:05.289734 | orchestrator | skipping: [testbed-node-3] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-29 01:08:05.289742 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:08:05.289750 | orchestrator | skipping: [testbed-node-2] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-29 01:08:05.289758 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:05.289766 | orchestrator | skipping: [testbed-node-4] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-29 01:08:05.289773 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:08:05.289780 | orchestrator | skipping: [testbed-node-5] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-29 01:08:05.289786 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:05.289794 | orchestrator | changed: [testbed-manager] => (item=/opt/configuration/environments/kolla/files/overlays/prometheus/prometheus-alertmanager.yml)
2025-05-29 01:08:05.289800 | orchestrator |
2025-05-29 01:08:05.289807 | orchestrator | TASK [prometheus : Find custom Alertmanager alert notification templates] ******
2025-05-29 01:08:05.289814 | orchestrator | Thursday 29 May 2025 01:05:17 +0000 (0:00:04.686) 0:01:40.365 **********
2025-05-29 01:08:05.289821 | orchestrator | ok: [testbed-manager -> localhost]
2025-05-29 01:08:05.289828 | orchestrator |
2025-05-29 01:08:05.289835 | orchestrator | TASK [prometheus : Copying over custom Alertmanager alert notification templates] ***
2025-05-29 01:08:05.289842 | orchestrator | Thursday 29 May 2025 01:05:18 +0000 (0:00:00.364) 0:01:40.729 **********
2025-05-29 01:08:05.289849 | orchestrator | skipping: [testbed-manager]
2025-05-29 01:08:05.289856 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:05.289863 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:05.289869 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:05.289876 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:08:05.289883 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:08:05.289890 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:05.289897 | orchestrator |
2025-05-29 01:08:05.289903 | orchestrator | TASK [prometheus : Copying over my.cnf for mysqld_exporter] ********************
2025-05-29 01:08:05.289910 | orchestrator | Thursday 29 May 2025 01:05:18 +0000 (0:00:00.756) 0:01:41.485 **********
2025-05-29 01:08:05.289917 | orchestrator | skipping: [testbed-manager]
2025-05-29 01:08:05.289924 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:08:05.289931 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:05.289938 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:08:05.289944 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:08:05.289951 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:08:05.289958 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:08:05.289969 | orchestrator |
2025-05-29 01:08:05.289976 | orchestrator | TASK [prometheus : Copying cloud config file for openstack exporter] ***********
2025-05-29 01:08:05.289984 | orchestrator | Thursday 29 May 2025 01:05:22 +0000 (0:00:03.220) 0:01:44.706 **********
2025-05-29 01:08:05.289995 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-29 01:08:05.290002 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:05.290009 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-29 01:08:05.290038 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:05.290047 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-29 01:08:05.290054 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:05.290060 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-29 01:08:05.290067 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:08:05.290074 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-29 01:08:05.290081 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:05.290088 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-29 01:08:05.290094 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:08:05.290101 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/clouds.yml.j2)
2025-05-29 01:08:05.290108 | orchestrator | skipping: [testbed-manager]
2025-05-29 01:08:05.290114 | orchestrator |
2025-05-29 01:08:05.290121 | orchestrator | TASK [prometheus : Copying config file for blackbox exporter] ******************
2025-05-29 01:08:05.290128 | orchestrator | Thursday 29 May 2025 01:05:25 +0000 (0:00:03.710) 0:01:48.417 **********
2025-05-29 01:08:05.290135 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-29 01:08:05.290146 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:05.290153 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-29 01:08:05.290160 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:05.290166 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-29 01:08:05.290173 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:05.290180 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-29 01:08:05.290186 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:08:05.290193 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-29 01:08:05.290200 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:08:05.290206 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2)
2025-05-29 01:08:05.290213 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:05.290219 | orchestrator | changed: [testbed-manager] =>
(item=/ansible/roles/prometheus/templates/prometheus-blackbox-exporter.yml.j2) 2025-05-29 01:08:05.290226 | orchestrator | 2025-05-29 01:08:05.290233 | orchestrator | TASK [prometheus : Find extra prometheus server config files] ****************** 2025-05-29 01:08:05.290240 | orchestrator | Thursday 29 May 2025 01:05:30 +0000 (0:00:04.564) 0:01:52.982 ********** 2025-05-29 01:08:05.290246 | orchestrator | [WARNING]: Skipped 2025-05-29 01:08:05.290253 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' path 2025-05-29 01:08:05.290260 | orchestrator | due to this access issue: 2025-05-29 01:08:05.290267 | orchestrator | '/opt/configuration/environments/kolla/files/overlays/prometheus/extras/' is 2025-05-29 01:08:05.290273 | orchestrator | not a directory 2025-05-29 01:08:05.290280 | orchestrator | ok: [testbed-manager -> localhost] 2025-05-29 01:08:05.290287 | orchestrator | 2025-05-29 01:08:05.290302 | orchestrator | TASK [prometheus : Create subdirectories for extra config files] *************** 2025-05-29 01:08:05.290309 | orchestrator | Thursday 29 May 2025 01:05:32 +0000 (0:00:01.880) 0:01:54.862 ********** 2025-05-29 01:08:05.290316 | orchestrator | skipping: [testbed-manager] 2025-05-29 01:08:05.290322 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:05.290329 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:05.290336 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:05.290342 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:08:05.290349 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:08:05.290356 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:08:05.290378 | orchestrator | 2025-05-29 01:08:05.290385 | orchestrator | TASK [prometheus : Template extra prometheus server config files] ************** 2025-05-29 01:08:05.290391 | orchestrator | Thursday 29 May 2025 01:05:33 +0000 (0:00:00.962) 0:01:55.824 ********** 2025-05-29 01:08:05.290398 | orchestrator | 
skipping: [testbed-manager] 2025-05-29 01:08:05.290405 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:05.290412 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:05.290419 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:05.290426 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:08:05.290432 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:08:05.290439 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:08:05.290446 | orchestrator | 2025-05-29 01:08:05.290453 | orchestrator | TASK [prometheus : Copying over prometheus msteams config file] **************** 2025-05-29 01:08:05.290460 | orchestrator | Thursday 29 May 2025 01:05:34 +0000 (0:00:00.799) 0:01:56.624 ********** 2025-05-29 01:08:05.290467 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-29 01:08:05.290474 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:05.290480 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-29 01:08:05.290487 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:05.290494 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-29 01:08:05.290501 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:08:05.290513 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-29 01:08:05.290520 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:05.290527 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-29 01:08:05.290534 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:08:05.290541 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-29 01:08:05.290548 | 
orchestrator | skipping: [testbed-node-3] 2025-05-29 01:08:05.290555 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.yml.j2)  2025-05-29 01:08:05.290561 | orchestrator | skipping: [testbed-manager] 2025-05-29 01:08:05.290568 | orchestrator | 2025-05-29 01:08:05.290575 | orchestrator | TASK [prometheus : Copying over prometheus msteams template file] ************** 2025-05-29 01:08:05.290582 | orchestrator | Thursday 29 May 2025 01:05:37 +0000 (0:00:03.481) 0:02:00.105 ********** 2025-05-29 01:08:05.290589 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-29 01:08:05.290596 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:05.290603 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-29 01:08:05.290609 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:08:05.290616 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-29 01:08:05.290623 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:05.290634 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-29 01:08:05.290645 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:05.290652 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-29 01:08:05.290659 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:08:05.290666 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-29 01:08:05.290672 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:08:05.290679 | orchestrator | skipping: [testbed-manager] => (item=/ansible/roles/prometheus/templates/prometheus-msteams.tmpl)  2025-05-29 01:08:05.290686 | 
orchestrator | skipping: [testbed-manager] 2025-05-29 01:08:05.290693 | orchestrator | 2025-05-29 01:08:05.290699 | orchestrator | TASK [prometheus : Check prometheus containers] ******************************** 2025-05-29 01:08:05.290706 | orchestrator | Thursday 29 May 2025 01:05:41 +0000 (0:00:03.809) 0:02:03.915 ********** 2025-05-29 01:08:05.290714 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.290723 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  
2025-05-29 01:08:05.290734 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.290742 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.290758 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}}) 2025-05-29 01:08:05.290765 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.290773 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.290781 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': 
['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.290793 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-server', 'value': {'container_name': 'prometheus_server', 'group': 'prometheus', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-v2-server:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-server/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'prometheus_v2:/var/lib/prometheus', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_server': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9091', 'active_passive': True}, 'prometheus_server_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9091', 'listen_port': '9091', 'active_passive': True}}}})  2025-05-29 01:08:05.290800 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.290816 | orchestrator | changed: 
[testbed-manager] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.290823 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.290831 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.290839 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.290846 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.290853 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.290864 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.290872 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.290887 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.290895 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.290902 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.290910 | orchestrator | changed: [testbed-node-5] => 
(item={'key': 'prometheus-node-exporter', 'value': {'container_name': 'prometheus_node_exporter', 'group': 'prometheus-node-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-node-exporter:1.7.0.20241206', 'pid_mode': 'host', 'volumes': ['/etc/kolla/prometheus-node-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/host:ro,rslave'], 'dimensions': {}}}) 2025-05-29 01:08:05.290917 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.290924 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.290935 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-mysqld-exporter', 'value': {'container_name': 'prometheus_mysqld_exporter', 'group': 'prometheus-mysqld-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-mysqld-exporter:0.15.1.20241206', 'volumes': ['/etc/kolla/prometheus-mysqld-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.290947 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.290958 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.290967 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 
'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})  2025-05-29 01:08:05.290975 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})  2025-05-29 01:08:05.290983 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}}) 2025-05-29 01:08:05.290993 | orchestrator | skipping: [testbed-node-3] => (item={'key': 
'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})  2025-05-29 01:08:05.291007 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}}) 2025-05-29 01:08:05.291018 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 
'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-29 01:08:05.291025 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291033 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291040 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-29 01:08:05.291052 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291063 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291074 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.291082 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-29 01:08:05.291089 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-29 01:08:05.291096 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True,
'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291107 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291120 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-memcached-exporter', 'value': {'container_name': 'prometheus_memcached_exporter', 'group': 'prometheus-memcached-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-memcached-exporter:0.14.2.20241206', 'volumes': ['/etc/kolla/prometheus-memcached-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291131 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-29 01:08:05.291139 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-29 01:08:05.291146 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291153 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.291170 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-29 01:08:05.291181 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes':
['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-29 01:08:05.291189 | orchestrator | changed: [testbed-node-4] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.291196 | orchestrator | changed: [testbed-node-0] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.291204 | orchestrator | changed: [testbed-node-3] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.291211 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.14,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291227 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-29 01:08:05.291265 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-29 01:08:05.291274 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.13,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291281 | orchestrator | changed: [testbed-node-5] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.291288 | orchestrator | skipping:
[testbed-node-5] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.15,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291295 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-cadvisor', 'value': {'container_name': 'prometheus_cadvisor', 'group': 'prometheus-cadvisor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-cadvisor:0.49.1.20241206', 'volumes': ['/etc/kolla/prometheus-cadvisor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', '/:/rootfs:ro', '/var/run:/var/run:rw', '/sys:/sys:ro', '/var/lib/docker/:/var/lib/docker:ro', '/dev/disk/:/dev/disk:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.291311 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-alertmanager', 'value': {'container_name': 'prometheus_alertmanager', 'group': 'prometheus-alertmanager', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-alertmanager:0.27.0.20241206', 'volumes': ['/etc/kolla/prometheus-alertmanager/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 'prometheus:/var/lib/prometheus'], 'dimensions': {}, 'haproxy': {'prometheus_alertmanager': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}, 'prometheus_alertmanager_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9093', 'listen_port': '9093', 'auth_user': 'admin', 'auth_pass': 'BXo64rLqmF7bTbWLDOnNJlD0qJ4BSTWocNHVNKU2', 'active_passive': True}}}})
2025-05-29 01:08:05.291322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-openstack-exporter', 'value': {'container_name': 'prometheus_openstack_exporter', 'group': 'prometheus-openstack-exporter', 'enabled': False, 'environment': {'OS_COMPUTE_API_VERSION': 'latest'}, 'image': 'registry.osism.tech/kolla/release/prometheus-openstack-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-openstack-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'prometheus_openstack_exporter': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}, 'prometheus_openstack_exporter_external': {'enabled': False, 'mode': 'http', 'external': True, 'port': '9198', 'backend_http_extra': ['timeout server 45s']}}}})
2025-05-29 01:08:05.291330 | orchestrator | changed: [testbed-manager] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291337 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.291344 | orchestrator | skipping: [testbed-manager] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.5,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291351 | orchestrator | changed: [testbed-node-1] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291376 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291388 | orchestrator | changed: [testbed-node-0] =>
(item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291396 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.291406 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291414 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291421 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.291428 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'prometheus-elasticsearch-exporter', 'value': {'container_name': 'prometheus_elasticsearch_exporter', 'group': 'prometheus-elasticsearch-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-elasticsearch-exporter:1.7.0.20241206', 'volumes': ['/etc/kolla/prometheus-elasticsearch-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291451 | orchestrator | skipping: [testbed-node-2] => (item={'key':
'prometheus-blackbox-exporter', 'value': {'container_name': 'prometheus_blackbox_exporter', 'group': 'prometheus-blackbox-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-blackbox-exporter:0.24.0.20241206', 'volumes': ['/etc/kolla/prometheus-blackbox-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291458 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-libvirt-exporter', 'value': {'container_name': 'prometheus_libvirt_exporter', 'group': 'prometheus-libvirt-exporter', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/prometheus-libvirt-exporter:8.1.0.20241206', 'volumes': ['/etc/kolla/prometheus-libvirt-exporter/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/run/libvirt:/run/libvirt:ro'], 'dimensions': {}}})
2025-05-29 01:08:05.291469 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'prometheus-msteams', 'value': {'container_name': 'prometheus_msteams', 'group': 'prometheus-msteams', 'enabled': False, 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'image': 'registry.osism.tech/dockerhub/kolla/release/prometheus-msteams:2.50.1.20241206', 'volumes': ['/etc/kolla/prometheus-msteams/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}}})
2025-05-29 01:08:05.291476 | orchestrator |
2025-05-29 01:08:05.291483 | orchestrator | TASK [prometheus : Creating prometheus database user and setting permissions] ***
2025-05-29 01:08:05.291490 | orchestrator | Thursday 29 May 2025 01:05:46 +0000 (0:00:05.421) 0:02:09.337 **********
2025-05-29 01:08:05.291497 | orchestrator | changed: [testbed-manager] => (item=testbed-node-0)
2025-05-29 01:08:05.291504 | orchestrator |
2025-05-29 01:08:05.291511 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-29 01:08:05.291517 | orchestrator | Thursday 29 May 2025 01:05:49 +0000 (0:00:02.756) 0:02:12.094 **********
2025-05-29 01:08:05.291524 | orchestrator |
2025-05-29 01:08:05.291531 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-29 01:08:05.291538 | orchestrator | Thursday 29 May 2025 01:05:49 +0000 (0:00:00.053) 0:02:12.147 **********
2025-05-29 01:08:05.291545 | orchestrator |
2025-05-29 01:08:05.291552 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-29 01:08:05.291558 | orchestrator | Thursday 29 May 2025 01:05:49 +0000 (0:00:00.198) 0:02:12.346 **********
2025-05-29 01:08:05.291565 | orchestrator |
2025-05-29 01:08:05.291572 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-29 01:08:05.291578 | orchestrator | Thursday 29 May 2025 01:05:49 +0000 (0:00:00.048) 0:02:12.395 **********
2025-05-29 01:08:05.291585 | orchestrator |
2025-05-29 01:08:05.291592 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-29 01:08:05.291598 | orchestrator | Thursday 29 May 2025 01:05:49 +0000 (0:00:00.049) 0:02:12.444 **********
2025-05-29 01:08:05.291605 | orchestrator |
2025-05-29 01:08:05.291617 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-29 01:08:05.291624 | orchestrator | Thursday 29 May 2025 01:05:49 +0000 (0:00:00.049) 0:02:12.494 **********
2025-05-29 01:08:05.291631 | orchestrator |
2025-05-29 01:08:05.291637 | orchestrator | TASK [prometheus : Flush handlers] *********************************************
2025-05-29 01:08:05.291644 | orchestrator | Thursday 29 May 2025 01:05:50 +0000 (0:00:00.219) 0:02:12.714 **********
2025-05-29 01:08:05.291651 | orchestrator |
2025-05-29 01:08:05.291658 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-server container] *************
2025-05-29 01:08:05.291665 | orchestrator | Thursday 29 May 2025 01:05:50 +0000 (0:00:00.071) 0:02:12.785 **********
2025-05-29 01:08:05.291671 | orchestrator | changed: [testbed-manager]
2025-05-29 01:08:05.291678 | orchestrator |
2025-05-29 01:08:05.291685 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-node-exporter container] ******
2025-05-29 01:08:05.291692 | orchestrator | Thursday 29 May 2025 01:06:07 +0000 (0:00:16.784) 0:02:29.570 **********
2025-05-29 01:08:05.291699 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:08:05.291705 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:08:05.291712 | orchestrator | changed: [testbed-node-4]
2025-05-29 01:08:05.291719 | orchestrator | changed: [testbed-manager]
2025-05-29 01:08:05.291726 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:08:05.291733 | orchestrator | changed: [testbed-node-3]
2025-05-29 01:08:05.291739 | orchestrator | changed: [testbed-node-5]
2025-05-29 01:08:05.291746 | orchestrator |
2025-05-29 01:08:05.291753 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-mysqld-exporter container] ****
2025-05-29 01:08:05.291760 | orchestrator | Thursday 29 May 2025 01:06:27 +0000 (0:00:20.715) 0:02:50.285 **********
2025-05-29 01:08:05.291767 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:08:05.291773 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:08:05.291780 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:08:05.291786 | orchestrator |
2025-05-29 01:08:05.291793 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-memcached-exporter container] ***
2025-05-29 01:08:05.291800 | orchestrator | Thursday 29 May 2025 01:06:40 +0000 (0:00:12.528) 0:03:02.814 **********
2025-05-29 01:08:05.291807 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:08:05.291814 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:08:05.291820 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:08:05.291827 | orchestrator |
2025-05-29 01:08:05.291834 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-cadvisor container] ***********
2025-05-29 01:08:05.291841 | orchestrator | Thursday 29 May 2025 01:06:56 +0000 (0:00:16.314) 0:03:19.129 **********
2025-05-29 01:08:05.291848 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:08:05.291855 | orchestrator | changed: [testbed-node-5]
2025-05-29 01:08:05.291862 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:08:05.291868 | orchestrator | changed: [testbed-manager]
2025-05-29 01:08:05.291878 | orchestrator | changed: [testbed-node-4]
2025-05-29 01:08:05.291886 | orchestrator | changed: [testbed-node-3]
2025-05-29 01:08:05.291893 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:08:05.291900 | orchestrator |
2025-05-29 01:08:05.291907 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-alertmanager container] *******
2025-05-29 01:08:05.291913 | orchestrator | Thursday 29 May 2025 01:07:16 +0000 (0:00:20.001) 0:03:39.130 **********
2025-05-29 01:08:05.291920 | orchestrator | changed: [testbed-manager]
2025-05-29 01:08:05.291927 | orchestrator |
2025-05-29 01:08:05.291934 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-elasticsearch-exporter container] ***
2025-05-29 01:08:05.291941 | orchestrator | Thursday 29 May 2025 01:07:26 +0000 (0:00:09.695) 0:03:48.826 **********
2025-05-29 01:08:05.291947 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:08:05.291954 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:08:05.291960 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:08:05.291967 | orchestrator |
2025-05-29 01:08:05.291974 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-blackbox-exporter container] ***
2025-05-29 01:08:05.291980 | orchestrator | Thursday 29 May 2025 01:07:40 +0000 (0:00:14.139) 0:04:02.965 **********
2025-05-29 01:08:05.291991 | orchestrator | changed: [testbed-manager]
2025-05-29 01:08:05.291998 | orchestrator |
2025-05-29 01:08:05.292005 | orchestrator | RUNNING HANDLER [prometheus : Restart prometheus-libvirt-exporter container] ***
2025-05-29 01:08:05.292012 | orchestrator | Thursday 29 May 2025 01:07:48 +0000 (0:00:08.026) 0:04:10.992 **********
2025-05-29 01:08:05.292018 | orchestrator | changed: [testbed-node-3]
2025-05-29 01:08:05.292025 | orchestrator | changed: [testbed-node-4]
2025-05-29 01:08:05.292032 | orchestrator | changed: [testbed-node-5]
2025-05-29 01:08:05.292039 | orchestrator |
2025-05-29 01:08:05.292045 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:08:05.292056 | orchestrator | testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9  rescued=0 ignored=0
2025-05-29 01:08:05.292064 | orchestrator | testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-29 01:08:05.292071 | orchestrator | testbed-node-1 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-29 01:08:05.292078 | orchestrator | testbed-node-2 : ok=15  changed=10  unreachable=0 failed=0 skipped=13  rescued=0 ignored=0
2025-05-29 01:08:05.292085 | orchestrator | testbed-node-3 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-29 01:08:05.292092 | orchestrator | testbed-node-4 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-29 01:08:05.292099 | orchestrator | testbed-node-5 : ok=12  changed=7  unreachable=0 failed=0 skipped=14  rescued=0 ignored=0
2025-05-29 01:08:05.292105 | orchestrator |
2025-05-29 01:08:05.292112 | orchestrator |
2025-05-29 01:08:05.292119 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 01:08:05.292125 | orchestrator | Thursday 29
May 2025 01:08:02 +0000 (0:00:13.908) 0:04:24.900 ********** 2025-05-29 01:08:05.292132 | orchestrator | =============================================================================== 2025-05-29 01:08:05.292139 | orchestrator | prometheus : Copying over custom prometheus alert rules files ---------- 36.18s 2025-05-29 01:08:05.292146 | orchestrator | prometheus : Restart prometheus-node-exporter container ---------------- 20.72s 2025-05-29 01:08:05.292153 | orchestrator | prometheus : Restart prometheus-cadvisor container --------------------- 20.00s 2025-05-29 01:08:05.292159 | orchestrator | prometheus : Restart prometheus-server container ----------------------- 16.78s 2025-05-29 01:08:05.292166 | orchestrator | prometheus : Copying over prometheus config file ----------------------- 16.77s 2025-05-29 01:08:05.292172 | orchestrator | prometheus : Restart prometheus-memcached-exporter container ----------- 16.31s 2025-05-29 01:08:05.292179 | orchestrator | prometheus : Restart prometheus-elasticsearch-exporter container ------- 14.14s 2025-05-29 01:08:05.292186 | orchestrator | prometheus : Restart prometheus-libvirt-exporter container ------------- 13.91s 2025-05-29 01:08:05.292192 | orchestrator | prometheus : Restart prometheus-mysqld-exporter container -------------- 12.53s 2025-05-29 01:08:05.292199 | orchestrator | prometheus : Restart prometheus-alertmanager container ------------------ 9.70s 2025-05-29 01:08:05.292206 | orchestrator | prometheus : Copying over config.json files ----------------------------- 9.38s 2025-05-29 01:08:05.292212 | orchestrator | prometheus : Restart prometheus-blackbox-exporter container ------------- 8.03s 2025-05-29 01:08:05.292219 | orchestrator | service-cert-copy : prometheus | Copying over extra CA certificates ----- 6.42s 2025-05-29 01:08:05.292226 | orchestrator | prometheus : Check prometheus containers -------------------------------- 5.42s 2025-05-29 01:08:05.292232 | orchestrator | prometheus : Ensuring config 
directories exist -------------------------- 4.89s 2025-05-29 01:08:05.292239 | orchestrator | service-cert-copy : prometheus | Copying over backend internal TLS key --- 4.73s 2025-05-29 01:08:05.292252 | orchestrator | prometheus : Copying over prometheus alertmanager config file ----------- 4.69s 2025-05-29 01:08:05.292259 | orchestrator | prometheus : Copying config file for blackbox exporter ------------------ 4.56s 2025-05-29 01:08:05.292266 | orchestrator | prometheus : Copying over prometheus web config file -------------------- 4.22s 2025-05-29 01:08:05.292272 | orchestrator | prometheus : Copying over prometheus msteams template file -------------- 3.81s 2025-05-29 01:08:05.292283 | orchestrator | 2025-05-29 01:08:05 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:08:08.340538 | orchestrator | 2025-05-29 01:08:08 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:08:08.342643 | orchestrator | 2025-05-29 01:08:08 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:08:08.345545 | orchestrator | 2025-05-29 01:08:08 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:08:08.350785 | orchestrator | 2025-05-29 01:08:08 | INFO  | Task 805e288f-1c93-45ff-b9d4-966879aaf853 is in state STARTED 2025-05-29 01:08:08.351653 | orchestrator | 2025-05-29 01:08:08 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:08:08.351686 | orchestrator | 2025-05-29 01:08:08 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:08:11.388660 | orchestrator | 2025-05-29 01:08:11 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:08:11.389620 | orchestrator | 2025-05-29 01:08:11 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:08:11.390619 | orchestrator | 2025-05-29 01:08:11 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 
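The PLAY RECAP block above follows Ansible's fixed per-host counter format (`ok=`, `changed=`, `unreachable=`, `failed=`, …). A short parser can turn such recap lines into data for gating a job on failures — a sketch for illustration, not part of the job itself:

```python
import re

# One recap line per host: "<host> : ok=N changed=N unreachable=N failed=N ..."
RECAP_RE = re.compile(
    r"(?P<host>[\w.-]+)\s*:\s*ok=(?P<ok>\d+)\s+changed=(?P<changed>\d+)"
    r"\s+unreachable=(?P<unreachable>\d+)\s+failed=(?P<failed>\d+)"
)

def parse_recap(lines):
    """Map each host in an Ansible PLAY RECAP to its integer counters."""
    stats = {}
    for line in lines:
        m = RECAP_RE.search(line)
        if m:
            stats[m.group("host")] = {
                k: int(v) for k, v in m.groupdict().items() if k != "host"
            }
    return stats

recap = [
    "testbed-manager : ok=24  changed=15  unreachable=0 failed=0 skipped=9",
    "testbed-node-0 : ok=15  changed=10  unreachable=0 failed=0 skipped=13",
]
stats = parse_recap(recap)
assert stats["testbed-manager"]["ok"] == 24
assert all(h["failed"] == 0 and h["unreachable"] == 0 for h in stats.values())
```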
01:08:35.778317 | orchestrator | 2025-05-29 01:08:35 | INFO  |
Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state STARTED 2025-05-29 01:08:35.779297 | orchestrator | 2025-05-29 01:08:35 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state STARTED 2025-05-29 01:08:35.781685 | orchestrator | 2025-05-29 01:08:35 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:08:35.783775 | orchestrator | 2025-05-29 01:08:35 | INFO  | Task 805e288f-1c93-45ff-b9d4-966879aaf853 is in state STARTED 2025-05-29 01:08:35.785242 | orchestrator | 2025-05-29 01:08:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:08:35.785264 | orchestrator | 2025-05-29 01:08:35 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:08:38.840397 | orchestrator | 2025-05-29 01:08:38 | INFO  | Task c9a2808c-a7e5-4b31-bdd4-1912afbb585a is in state STARTED 2025-05-29 01:08:38.842218 | orchestrator | 2025-05-29 01:08:38 | INFO  | Task b0bf4f90-1ccd-4fbc-84c3-bf396bf3b873 is in state SUCCESS 2025-05-29 01:08:38.845646 | orchestrator | 2025-05-29 01:08:38.845729 | orchestrator | 2025-05-29 01:08:38.845747 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-29 01:08:38.845760 | orchestrator | 2025-05-29 01:08:38.845770 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-29 01:08:38.845782 | orchestrator | Thursday 29 May 2025 01:05:16 +0000 (0:00:00.466) 0:00:00.466 ********** 2025-05-29 01:08:38.845793 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:08:38.845806 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:08:38.845816 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:08:38.845827 | orchestrator | 2025-05-29 01:08:38.845838 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-29 01:08:38.845849 | orchestrator | Thursday 29 May 2025 01:05:16 +0000 (0:00:00.502) 0:00:00.969 ********** 2025-05-29 01:08:38.845861 | 
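The repeated "Task … is in state STARTED / Wait 1 second(s)" lines above come from a poll-until-done loop over the queued OSISM tasks. A minimal sketch of that loop, with `get_state` as a hypothetical callable standing in for the real task-status API:

```python
import time

def wait_for_tasks(get_state, task_ids, interval=1.0, timeout=3600.0):
    """Poll every task until it leaves STARTED, printing status lines
    in the style of the log above."""
    pending = set(task_ids)
    deadline = time.monotonic() + timeout
    while pending:
        for task_id in sorted(pending):
            state = get_state(task_id)
            print(f"Task {task_id} is in state {state}")
            if state in ("SUCCESS", "FAILURE"):
                pending.discard(task_id)
        if not pending:
            break
        if time.monotonic() > deadline:
            raise TimeoutError(f"tasks still pending: {sorted(pending)}")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)

# Toy driver: every task reports SUCCESS on its second poll.
polls = {}
def fake_state(task_id):
    polls[task_id] = polls.get(task_id, 0) + 1
    return "SUCCESS" if polls[task_id] >= 2 else "STARTED"

wait_for_tasks(fake_state, ["b0bf4f90", "380b6076"], interval=0.01)
assert all(n == 2 for n in polls.values())
```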
orchestrator | ok: [testbed-node-0] => (item=enable_glance_True) 2025-05-29 01:08:38.845873 | orchestrator | ok: [testbed-node-1] => (item=enable_glance_True) 2025-05-29 01:08:38.845884 | orchestrator | ok: [testbed-node-2] => (item=enable_glance_True) 2025-05-29 01:08:38.845894 | orchestrator | 2025-05-29 01:08:38.845906 | orchestrator | PLAY [Apply role glance] ******************************************************* 2025-05-29 01:08:38.845917 | orchestrator | 2025-05-29 01:08:38.845928 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-29 01:08:38.845940 | orchestrator | Thursday 29 May 2025 01:05:17 +0000 (0:00:00.325) 0:00:01.294 ********** 2025-05-29 01:08:38.845979 | orchestrator | included: /ansible/roles/glance/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:08:38.845992 | orchestrator | 2025-05-29 01:08:38.846003 | orchestrator | TASK [service-ks-register : glance | Creating services] ************************ 2025-05-29 01:08:38.846061 | orchestrator | Thursday 29 May 2025 01:05:17 +0000 (0:00:00.811) 0:00:02.106 ********** 2025-05-29 01:08:38.846079 | orchestrator | changed: [testbed-node-0] => (item=glance (image)) 2025-05-29 01:08:38.846090 | orchestrator | 2025-05-29 01:08:38.846101 | orchestrator | TASK [service-ks-register : glance | Creating endpoints] *********************** 2025-05-29 01:08:38.846113 | orchestrator | Thursday 29 May 2025 01:05:21 +0000 (0:00:03.590) 0:00:05.696 ********** 2025-05-29 01:08:38.846124 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api-int.testbed.osism.xyz:9292 -> internal) 2025-05-29 01:08:38.846137 | orchestrator | changed: [testbed-node-0] => (item=glance -> https://api.testbed.osism.xyz:9292 -> public) 2025-05-29 01:08:38.846148 | orchestrator | 2025-05-29 01:08:38.846160 | orchestrator | TASK [service-ks-register : glance | Creating projects] ************************ 2025-05-29 01:08:38.846173 | 
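The "Creating endpoints" task above registers one internal and one public endpoint per service, derived from the internal and external FQDNs ("glance -> https://api-int.testbed.osism.xyz:9292 -> internal", and the public counterpart). That mapping can be sketched as a pure helper (function name hypothetical, not the role's actual code):

```python
def service_endpoints(service, internal_fqdn, external_fqdn, port):
    """Build the (service, url, interface) triples that service-ks-register
    registers, matching the 'glance -> URL -> interface' log lines."""
    return [
        (service, f"https://{internal_fqdn}:{port}", "internal"),
        (service, f"https://{external_fqdn}:{port}", "public"),
    ]

eps = service_endpoints(
    "glance", "api-int.testbed.osism.xyz", "api.testbed.osism.xyz", 9292
)
assert eps[0] == ("glance", "https://api-int.testbed.osism.xyz:9292", "internal")
assert eps[1][2] == "public"
```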
orchestrator | Thursday 29 May 2025 01:05:27 +0000 (0:00:06.400) 0:00:12.097 ********** 2025-05-29 01:08:38.846211 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-29 01:08:38.846225 | orchestrator | 2025-05-29 01:08:38.846240 | orchestrator | TASK [service-ks-register : glance | Creating users] *************************** 2025-05-29 01:08:38.846255 | orchestrator | Thursday 29 May 2025 01:05:31 +0000 (0:00:03.415) 0:00:15.513 ********** 2025-05-29 01:08:38.846269 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-29 01:08:38.846299 | orchestrator | changed: [testbed-node-0] => (item=glance -> service) 2025-05-29 01:08:38.846314 | orchestrator | 2025-05-29 01:08:38.846328 | orchestrator | TASK [service-ks-register : glance | Creating roles] *************************** 2025-05-29 01:08:38.846342 | orchestrator | Thursday 29 May 2025 01:05:35 +0000 (0:00:04.204) 0:00:19.717 ********** 2025-05-29 01:08:38.846356 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-29 01:08:38.846370 | orchestrator | 2025-05-29 01:08:38.846384 | orchestrator | TASK [service-ks-register : glance | Granting user roles] ********************** 2025-05-29 01:08:38.846398 | orchestrator | Thursday 29 May 2025 01:05:39 +0000 (0:00:03.865) 0:00:23.583 ********** 2025-05-29 01:08:38.846412 | orchestrator | changed: [testbed-node-0] => (item=glance -> service -> admin) 2025-05-29 01:08:38.846426 | orchestrator | 2025-05-29 01:08:38.846905 | orchestrator | TASK [glance : Ensuring config directories exist] ****************************** 2025-05-29 01:08:38.846922 | orchestrator | Thursday 29 May 2025 01:05:43 +0000 (0:00:04.394) 0:00:27.978 ********** 2025-05-29 01:08:38.846979 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': 
{'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 01:08:38.847010 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 01:08:38.847116 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 
'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 01:08:38.847246 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 
'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 01:08:38.847605 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 
'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 01:08:38.847675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file 
ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 01:08:38.847696 | orchestrator | 2025-05-29 01:08:38.847708 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-29 01:08:38.847720 | orchestrator | Thursday 29 May 2025 01:05:48 +0000 (0:00:04.629) 0:00:32.607 ********** 2025-05-29 01:08:38.847732 | orchestrator | included: /ansible/roles/glance/tasks/external_ceph.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:08:38.847776 | orchestrator | 2025-05-29 01:08:38.847922 | orchestrator | TASK [glance : Ensuring glance service ceph config subdir exists] ************** 2025-05-29 01:08:38.847934 | orchestrator | Thursday 29 May 2025 01:05:48 +0000 (0:00:00.495) 0:00:33.102 ********** 2025-05-29 01:08:38.847942 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:08:38.848167 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:08:38.848183 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:08:38.848194 | orchestrator | 2025-05-29 01:08:38.848205 | orchestrator | TASK [glance : Copy over multiple ceph configs for Glance] ********************* 2025-05-29 01:08:38.848216 | orchestrator | Thursday 29 May 2025 
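Each glance_api item above carries a Docker-style healthcheck (`interval` 30, `retries` 3, a `healthcheck_curl` test). Docker only flips a container to unhealthy after `retries` consecutive probe failures, and any success resets the counter; a minimal model of that folding, for illustration rather than Kolla's actual logic:

```python
def final_health(probe_results, retries=3):
    """Fold a sequence of probe outcomes (True = healthcheck passed)
    into the resulting status: unhealthy only after `retries`
    consecutive failures; any success resets the failure counter."""
    failures = 0
    status = "healthy"
    for ok in probe_results:
        if ok:
            failures = 0
            status = "healthy"
        else:
            failures += 1
            if failures >= retries:
                status = "unhealthy"
    return status

# Two failures followed by a success stay healthy with retries=3.
assert final_health([True, False, False, True]) == "healthy"
assert final_health([True, False, False, False]) == "unhealthy"
```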
01:05:55 +0000 (0:00:06.382) 0:00:39.485 ********** 2025-05-29 01:08:38.848233 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-29 01:08:38.848245 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-29 01:08:38.848255 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-29 01:08:38.848266 | orchestrator | 2025-05-29 01:08:38.848276 | orchestrator | TASK [glance : Copy over ceph Glance keyrings] ********************************* 2025-05-29 01:08:38.848286 | orchestrator | Thursday 29 May 2025 01:05:56 +0000 (0:00:01.685) 0:00:41.170 ********** 2025-05-29 01:08:38.848296 | orchestrator | changed: [testbed-node-0] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-29 01:08:38.848305 | orchestrator | changed: [testbed-node-1] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-29 01:08:38.848315 | orchestrator | changed: [testbed-node-2] => (item={'name': 'rbd', 'type': 'rbd', 'cluster': 'ceph', 'enabled': True}) 2025-05-29 01:08:38.848324 | orchestrator | 2025-05-29 01:08:38.848333 | orchestrator | TASK [glance : Ensuring config directory has correct owner and permission] ***** 2025-05-29 01:08:38.848343 | orchestrator | Thursday 29 May 2025 01:05:57 +0000 (0:00:01.033) 0:00:42.203 ********** 2025-05-29 01:08:38.848353 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:08:38.848364 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:08:38.848373 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:08:38.848383 | orchestrator | 2025-05-29 01:08:38.848392 | orchestrator | TASK [glance : Check if policies shall be overwritten] ************************* 2025-05-29 01:08:38.848402 | orchestrator | Thursday 29 May 2025 01:05:58 +0000 (0:00:00.793) 0:00:42.997 ********** 2025-05-29 
01:08:38.848411 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.848420 | orchestrator | 2025-05-29 01:08:38.848429 | orchestrator | TASK [glance : Set glance policy file] ***************************************** 2025-05-29 01:08:38.848541 | orchestrator | Thursday 29 May 2025 01:05:58 +0000 (0:00:00.109) 0:00:43.106 ********** 2025-05-29 01:08:38.848555 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.848565 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:38.848575 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:38.848585 | orchestrator | 2025-05-29 01:08:38.848595 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-29 01:08:38.848605 | orchestrator | Thursday 29 May 2025 01:05:59 +0000 (0:00:00.414) 0:00:43.521 ********** 2025-05-29 01:08:38.848615 | orchestrator | included: /ansible/roles/glance/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:08:38.848638 | orchestrator | 2025-05-29 01:08:38.848648 | orchestrator | TASK [service-cert-copy : glance | Copying over extra CA certificates] ********* 2025-05-29 01:08:38.848657 | orchestrator | Thursday 29 May 2025 01:06:00 +0000 (0:00:00.753) 0:00:44.274 ********** 2025-05-29 01:08:38.848733 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 01:08:38.848758 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout 
client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 01:08:38.849201 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server 
testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 01:08:38.849243 | orchestrator | 2025-05-29 01:08:38.849255 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS certificate] *** 2025-05-29 01:08:38.849265 | orchestrator | Thursday 29 May 2025 01:06:04 +0000 (0:00:04.135) 0:00:48.409 ********** 2025-05-29 01:08:38.849284 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 
192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-29 01:08:38.849295 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:38.849400 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-29 01:08:38.849509 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.849531 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout 
server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-29 01:08:38.849543 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:38.849553 | orchestrator | 2025-05-29 01:08:38.849564 | orchestrator | TASK [service-cert-copy : glance | Copying over backend internal TLS key] ****** 2025-05-29 01:08:38.849574 | orchestrator | Thursday 29 May 2025 01:06:10 +0000 (0:00:06.379) 0:00:54.788 ********** 2025-05-29 01:08:38.849662 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-29 01:08:38.849690 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:38.849702 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server 
testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-29 01:08:38.849714 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.849729 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 
192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}})  2025-05-29 01:08:38.849746 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:38.849755 | orchestrator | 2025-05-29 01:08:38.849765 | orchestrator | TASK [glance : Creating TLS backend PEM File] ********************************** 2025-05-29 01:08:38.849775 | orchestrator | Thursday 29 May 2025 01:06:19 +0000 (0:00:09.091) 0:01:03.880 ********** 2025-05-29 01:08:38.849785 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.849795 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:38.849805 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:38.849814 | orchestrator | 2025-05-29 01:08:38.849883 | orchestrator | TASK [glance : Copying over config.json files for services] ******************** 2025-05-29 01:08:38.849895 | orchestrator | Thursday 29 May 2025 01:06:24 +0000 (0:00:04.522) 0:01:08.402 ********** 2025-05-29 01:08:38.849905 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 
192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 01:08:38.849923 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': 
{'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 01:08:38.850041 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 01:08:38.850067 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl 
verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 01:08:38.850154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 
'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 01:08:38.850174 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 
'tls_backend': 'yes'}}}})
2025-05-29 01:08:38.850184 | orchestrator |
2025-05-29 01:08:38.850200 | orchestrator | TASK [glance : Copying over glance-api.conf] ***********************************
2025-05-29 01:08:38.850209 | orchestrator | Thursday 29 May 2025 01:06:28 +0000 (0:00:04.525) 0:01:12.928 **********
2025-05-29 01:08:38.850217 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:08:38.850225 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:08:38.850234 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:08:38.850241 | orchestrator |
2025-05-29 01:08:38.850250 | orchestrator | TASK [glance : Copying over glance-cache.conf for glance_api] ******************
2025-05-29 01:08:38.850258 | orchestrator | Thursday 29 May 2025 01:06:40 +0000 (0:00:12.012) 0:01:24.941 **********
2025-05-29 01:08:38.850267 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:38.850295 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:38.850304 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:38.850313 | orchestrator |
2025-05-29 01:08:38.850323 | orchestrator | TASK [glance : Copying over glance-swift.conf for glance_api] ******************
2025-05-29 01:08:38.850332 | orchestrator | Thursday 29 May 2025 01:06:53 +0000 (0:00:12.634) 0:01:37.576 **********
2025-05-29 01:08:38.850341 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:38.850350 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:38.850360 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:38.850369 | orchestrator |
2025-05-29 01:08:38.850379 | orchestrator | TASK [glance : Copying over glance-image-import.conf] **************************
2025-05-29 01:08:38.850388 | orchestrator | Thursday 29 May 2025 01:07:05 +0000 (0:00:12.621) 0:01:50.198 **********
2025-05-29 01:08:38.850398 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:38.850407 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:38.850416 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:38.850426 | orchestrator |
2025-05-29 01:08:38.850435 | orchestrator | TASK [glance : Copying over property-protections-rules.conf] *******************
2025-05-29 01:08:38.850445 | orchestrator | Thursday 29 May 2025 01:07:13 +0000 (0:00:07.259) 0:01:57.458 **********
2025-05-29 01:08:38.850454 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:38.850558 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:38.850572 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:38.850581 | orchestrator |
2025-05-29 01:08:38.850589 | orchestrator | TASK [glance : Copying over existing policy file] ******************************
2025-05-29 01:08:38.850598 | orchestrator | Thursday 29 May 2025 01:07:19 +0000 (0:00:05.793) 0:02:03.251 **********
2025-05-29 01:08:38.850606 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:38.850614 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:38.850622 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:38.850630 | orchestrator |
2025-05-29 01:08:38.850639 | orchestrator | TASK [glance : Copying over glance-haproxy-tls.cfg] ****************************
2025-05-29 01:08:38.850647 | orchestrator | Thursday 29 May 2025 01:07:19 +0000 (0:00:00.286) 0:02:03.538 **********
2025-05-29 01:08:38.850655 | orchestrator | skipping: [testbed-node-0] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-29 01:08:38.850664 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:38.850672 | orchestrator | skipping: [testbed-node-1] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-29 01:08:38.850681 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:38.850691 | orchestrator | skipping: [testbed-node-2] => (item=/ansible/roles/glance/templates/glance-tls-proxy.cfg.j2)
2025-05-29 01:08:38.850700 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:38.850709 | orchestrator |
2025-05-29
01:08:38.850718 | orchestrator | TASK [glance : Check glance containers] **************************************** 2025-05-29 01:08:38.850727 | orchestrator | Thursday 29 May 2025 01:07:22 +0000 (0:00:03.345) 0:02:06.884 ********** 2025-05-29 01:08:38.850746 | orchestrator | changed: [testbed-node-0] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.10,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 
01:08:38.850798 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.10:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 01:08:38.850816 | orchestrator | changed: [testbed-node-2] => 
(item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 'localhost,127.0.0.1,192.168.16.12,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 01:08:38.850856 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': 
['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl -u openstack:password 192.168.16.12:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 01:08:38.850869 | orchestrator | changed: [testbed-node-1] => (item={'key': 'glance-api', 'value': {'container_name': 'glance_api', 'group': 'glance-api', 'host_in_groups': True, 'enabled': True, 'image': 'registry.osism.tech/kolla/release/glance-api:28.1.1.20241206', 'environment': {'http_proxy': '', 'https_proxy': '', 'no_proxy': 
'localhost,127.0.0.1,192.168.16.11,192.168.16.9'}, 'privileged': True, 'volumes': ['/etc/kolla/glance-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'glance:/var/lib/glance/', '', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', '/dev:/dev'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:9292'], 'timeout': '30'}, 'haproxy': {'glance_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}, 'glance_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5', '']}}}}) 2025-05-29 01:08:38.850889 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'glance-tls-proxy', 'value': {'container_name': 'glance_tls_proxy', 'group': 'glance-api', 'host_in_groups': True, 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/glance-tls-proxy:28.1.1.20241206', 'volumes': ['/etc/kolla/glance-tls-proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl -u openstack:password 192.168.16.11:9293'], 'timeout': '30'}, 'haproxy': {'glance_tls_proxy': {'enabled': False, 'mode': 'http', 'external': False, 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}, 'glance_tls_proxy_external': {'enabled': False, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '9292', 'frontend_http_extra': ['timeout client 6h'], 'backend_http_extra': ['timeout server 6h'], 'custom_member_list': ['server testbed-node-0 192.168.16.10:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-1 192.168.16.11:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', 'server testbed-node-2 192.168.16.12:9292 check inter 2000 rise 2 fall 5 ssl verify required ca-file ca-certificates.crt', ''], 'tls_backend': 'yes'}}}})  2025-05-29 01:08:38.850899 | orchestrator | 2025-05-29 01:08:38.850908 | orchestrator | TASK [glance : include_tasks] ************************************************** 2025-05-29 01:08:38.850936 | orchestrator | Thursday 29 May 2025 01:07:29 +0000 (0:00:06.701) 0:02:13.585 ********** 2025-05-29 01:08:38.850947 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.850956 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:38.850965 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:38.850974 | orchestrator | 2025-05-29 01:08:38.851041 | orchestrator | TASK [glance : Creating Glance database] 
*************************************** 2025-05-29 01:08:38.851054 | orchestrator | Thursday 29 May 2025 01:07:30 +0000 (0:00:00.868) 0:02:14.453 ********** 2025-05-29 01:08:38.851062 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:08:38.851070 | orchestrator | 2025-05-29 01:08:38.851078 | orchestrator | TASK [glance : Creating Glance database user and setting permissions] ********** 2025-05-29 01:08:38.851086 | orchestrator | Thursday 29 May 2025 01:07:32 +0000 (0:00:02.405) 0:02:16.859 ********** 2025-05-29 01:08:38.851095 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:08:38.851104 | orchestrator | 2025-05-29 01:08:38.851113 | orchestrator | TASK [glance : Enable log_bin_trust_function_creators function] **************** 2025-05-29 01:08:38.851121 | orchestrator | Thursday 29 May 2025 01:07:35 +0000 (0:00:02.648) 0:02:19.508 ********** 2025-05-29 01:08:38.851130 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:08:38.851139 | orchestrator | 2025-05-29 01:08:38.851147 | orchestrator | TASK [glance : Running Glance bootstrap container] ***************************** 2025-05-29 01:08:38.851156 | orchestrator | Thursday 29 May 2025 01:07:37 +0000 (0:00:02.247) 0:02:21.755 ********** 2025-05-29 01:08:38.851177 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:08:38.851187 | orchestrator | 2025-05-29 01:08:38.851195 | orchestrator | TASK [glance : Disable log_bin_trust_function_creators function] *************** 2025-05-29 01:08:38.851204 | orchestrator | Thursday 29 May 2025 01:08:04 +0000 (0:00:26.934) 0:02:48.690 ********** 2025-05-29 01:08:38.851212 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:08:38.851221 | orchestrator | 2025-05-29 01:08:38.851230 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-29 01:08:38.851239 | orchestrator | Thursday 29 May 2025 01:08:06 +0000 (0:00:02.311) 0:02:51.002 ********** 2025-05-29 01:08:38.851247 | orchestrator | 2025-05-29 
01:08:38.851256 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-29 01:08:38.851264 | orchestrator | Thursday 29 May 2025 01:08:06 +0000 (0:00:00.074) 0:02:51.077 ********** 2025-05-29 01:08:38.851273 | orchestrator | 2025-05-29 01:08:38.851281 | orchestrator | TASK [glance : Flush handlers] ************************************************* 2025-05-29 01:08:38.851289 | orchestrator | Thursday 29 May 2025 01:08:06 +0000 (0:00:00.063) 0:02:51.140 ********** 2025-05-29 01:08:38.851298 | orchestrator | 2025-05-29 01:08:38.851306 | orchestrator | RUNNING HANDLER [glance : Restart glance-api container] ************************ 2025-05-29 01:08:38.851315 | orchestrator | Thursday 29 May 2025 01:08:07 +0000 (0:00:00.232) 0:02:51.373 ********** 2025-05-29 01:08:38.851323 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:08:38.851332 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:08:38.851341 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:08:38.851351 | orchestrator | 2025-05-29 01:08:38.851360 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 01:08:38.851371 | orchestrator | testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0 2025-05-29 01:08:38.851387 | orchestrator | testbed-node-1 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-29 01:08:38.851396 | orchestrator | testbed-node-2 : ok=15  changed=9  unreachable=0 failed=0 skipped=11  rescued=0 ignored=0 2025-05-29 01:08:38.851405 | orchestrator | 2025-05-29 01:08:38.851414 | orchestrator | 2025-05-29 01:08:38.851423 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 01:08:38.851431 | orchestrator | Thursday 29 May 2025 01:08:36 +0000 (0:00:29.538) 0:03:20.912 ********** 2025-05-29 01:08:38.851439 | orchestrator | 
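The PLAY RECAP counters above (ok/changed/unreachable/failed per host) follow Ansible's standard recap format and can be checked programmatically; a minimal sketch, assuming that format (the parser itself is a hypothetical helper, not part of the job):

```python
import re

# Matches one recap line, e.g.:
# "testbed-node-0 : ok=26 changed=18 unreachable=0 failed=0 skipped=12 rescued=0 ignored=0"
RECAP_RE = re.compile(r"(\S+)\s*:\s*((?:\w+=\d+\s*)+)")

def parse_recap(line: str) -> tuple[str, dict[str, int]]:
    """Return (host, counters) for one Ansible recap line."""
    m = RECAP_RE.search(line)
    if not m:
        raise ValueError(f"not a recap line: {line!r}")
    host = m.group(1)
    counters = {k: int(v) for k, v in (f.split("=") for f in m.group(2).split())}
    return host, counters

host, c = parse_recap(
    "testbed-node-0 : ok=26  changed=18  unreachable=0 failed=0 skipped=12  rescued=0 ignored=0"
)
assert host == "testbed-node-0" and c["failed"] == 0 and c["unreachable"] == 0
```

A CI post-step could apply this to every recap line and fail fast when any host reports nonzero `failed` or `unreachable` counts.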
=============================================================================== 2025-05-29 01:08:38.851447 | orchestrator | glance : Restart glance-api container ---------------------------------- 29.54s 2025-05-29 01:08:38.851456 | orchestrator | glance : Running Glance bootstrap container ---------------------------- 26.93s 2025-05-29 01:08:38.851464 | orchestrator | glance : Copying over glance-cache.conf for glance_api ----------------- 12.63s 2025-05-29 01:08:38.851530 | orchestrator | glance : Copying over glance-swift.conf for glance_api ----------------- 12.62s 2025-05-29 01:08:38.851539 | orchestrator | glance : Copying over glance-api.conf ---------------------------------- 12.01s 2025-05-29 01:08:38.851547 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS key ------ 9.09s 2025-05-29 01:08:38.851555 | orchestrator | glance : Copying over glance-image-import.conf -------------------------- 7.26s 2025-05-29 01:08:38.851564 | orchestrator | glance : Check glance containers ---------------------------------------- 6.70s 2025-05-29 01:08:38.851573 | orchestrator | service-ks-register : glance | Creating endpoints ----------------------- 6.40s 2025-05-29 01:08:38.851582 | orchestrator | glance : Ensuring glance service ceph config subdir exists -------------- 6.38s 2025-05-29 01:08:38.851591 | orchestrator | service-cert-copy : glance | Copying over backend internal TLS certificate --- 6.38s 2025-05-29 01:08:38.851599 | orchestrator | glance : Copying over property-protections-rules.conf ------------------- 5.79s 2025-05-29 01:08:38.851607 | orchestrator | glance : Ensuring config directories exist ------------------------------ 4.63s 2025-05-29 01:08:38.851627 | orchestrator | glance : Copying over config.json files for services -------------------- 4.53s 2025-05-29 01:08:38.851635 | orchestrator | glance : Creating TLS backend PEM File ---------------------------------- 4.52s 2025-05-29 01:08:38.851643 | orchestrator | 
service-ks-register : glance | Granting user roles ---------------------- 4.39s 2025-05-29 01:08:38.851651 | orchestrator | service-ks-register : glance | Creating users --------------------------- 4.20s 2025-05-29 01:08:38.851659 | orchestrator | service-cert-copy : glance | Copying over extra CA certificates --------- 4.13s 2025-05-29 01:08:38.851667 | orchestrator | service-ks-register : glance | Creating roles --------------------------- 3.87s 2025-05-29 01:08:38.851729 | orchestrator | service-ks-register : glance | Creating services ------------------------ 3.59s 2025-05-29 01:08:38.851741 | orchestrator | 2025-05-29 01:08:38 | INFO  | Task b04bb0db-c41d-423c-8f33-32174d7eae62 is in state SUCCESS 2025-05-29 01:08:38.851752 | orchestrator | 2025-05-29 01:08:38.851762 | orchestrator | 2025-05-29 01:08:38.851772 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-29 01:08:38.851783 | orchestrator | 2025-05-29 01:08:38.851793 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-29 01:08:38.851802 | orchestrator | Thursday 29 May 2025 01:05:31 +0000 (0:00:00.345) 0:00:00.345 ********** 2025-05-29 01:08:38.851811 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:08:38.851821 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:08:38.851830 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:08:38.851839 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:08:38.851848 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:08:38.851857 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:08:38.851866 | orchestrator | 2025-05-29 01:08:38.851875 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-29 01:08:38.851906 | orchestrator | Thursday 29 May 2025 01:05:32 +0000 (0:00:00.638) 0:00:00.983 ********** 2025-05-29 01:08:38.851916 | orchestrator | ok: [testbed-node-0] => (item=enable_cinder_True) 2025-05-29 
01:08:38.851927 | orchestrator | ok: [testbed-node-1] => (item=enable_cinder_True) 2025-05-29 01:08:38.851937 | orchestrator | ok: [testbed-node-2] => (item=enable_cinder_True) 2025-05-29 01:08:38.851945 | orchestrator | ok: [testbed-node-3] => (item=enable_cinder_True) 2025-05-29 01:08:38.851953 | orchestrator | ok: [testbed-node-4] => (item=enable_cinder_True) 2025-05-29 01:08:38.851961 | orchestrator | ok: [testbed-node-5] => (item=enable_cinder_True) 2025-05-29 01:08:38.851969 | orchestrator | 2025-05-29 01:08:38.851978 | orchestrator | PLAY [Apply role cinder] ******************************************************* 2025-05-29 01:08:38.851987 | orchestrator | 2025-05-29 01:08:38.851996 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-29 01:08:38.852006 | orchestrator | Thursday 29 May 2025 01:05:33 +0000 (0:00:00.832) 0:00:01.816 ********** 2025-05-29 01:08:38.852015 | orchestrator | included: /ansible/roles/cinder/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 01:08:38.852026 | orchestrator | 2025-05-29 01:08:38.852035 | orchestrator | TASK [service-ks-register : cinder | Creating services] ************************ 2025-05-29 01:08:38.852043 | orchestrator | Thursday 29 May 2025 01:05:34 +0000 (0:00:01.057) 0:00:02.873 ********** 2025-05-29 01:08:38.852052 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 (volumev3)) 2025-05-29 01:08:38.852061 | orchestrator | 2025-05-29 01:08:38.852069 | orchestrator | TASK [service-ks-register : cinder | Creating endpoints] *********************** 2025-05-29 01:08:38.852078 | orchestrator | Thursday 29 May 2025 01:05:37 +0000 (0:00:03.188) 0:00:06.062 ********** 2025-05-29 01:08:38.852086 | orchestrator | changed: [testbed-node-0] => (item=cinderv3 -> https://api-int.testbed.osism.xyz:8776/v3/%(tenant_id)s -> internal) 2025-05-29 01:08:38.852103 | orchestrator | changed: 
[testbed-node-0] => (item=cinderv3 -> https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s -> public) 2025-05-29 01:08:38.852112 | orchestrator | 2025-05-29 01:08:38.852129 | orchestrator | TASK [service-ks-register : cinder | Creating projects] ************************ 2025-05-29 01:08:38.852138 | orchestrator | Thursday 29 May 2025 01:05:44 +0000 (0:00:06.821) 0:00:12.884 ********** 2025-05-29 01:08:38.852147 | orchestrator | ok: [testbed-node-0] => (item=service) 2025-05-29 01:08:38.852260 | orchestrator | 2025-05-29 01:08:38.852272 | orchestrator | TASK [service-ks-register : cinder | Creating users] *************************** 2025-05-29 01:08:38.852282 | orchestrator | Thursday 29 May 2025 01:05:47 +0000 (0:00:03.358) 0:00:16.243 ********** 2025-05-29 01:08:38.852290 | orchestrator | [WARNING]: Module did not set no_log for update_password 2025-05-29 01:08:38.852298 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service) 2025-05-29 01:08:38.852307 | orchestrator | 2025-05-29 01:08:38.852315 | orchestrator | TASK [service-ks-register : cinder | Creating roles] *************************** 2025-05-29 01:08:38.852369 | orchestrator | Thursday 29 May 2025 01:05:51 +0000 (0:00:03.841) 0:00:20.084 ********** 2025-05-29 01:08:38.852379 | orchestrator | ok: [testbed-node-0] => (item=admin) 2025-05-29 01:08:38.852387 | orchestrator | 2025-05-29 01:08:38.852396 | orchestrator | TASK [service-ks-register : cinder | Granting user roles] ********************** 2025-05-29 01:08:38.852404 | orchestrator | Thursday 29 May 2025 01:05:54 +0000 (0:00:03.198) 0:00:23.283 ********** 2025-05-29 01:08:38.852413 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> admin) 2025-05-29 01:08:38.852422 | orchestrator | changed: [testbed-node-0] => (item=cinder -> service -> service) 2025-05-29 01:08:38.852431 | orchestrator | 2025-05-29 01:08:38.852439 | orchestrator | TASK [cinder : Ensuring config directories exist] ****************************** 
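The cinder endpoints created above embed a legacy %-style placeholder, e.g. `https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s`, which Keystone clients expand per project. A minimal sketch of that expansion, assuming Python's mapping-based %-formatting (the tenant id value is illustrative, not taken from the log):

```python
# Endpoint URL as registered in the log, with the %(tenant_id)s placeholder intact.
template = "https://api.testbed.osism.xyz:8776/v3/%(tenant_id)s"

def expand(template: str, tenant_id: str) -> str:
    # %-formatting with a dict substitutes every %(name)s placeholder.
    return template % {"tenant_id": tenant_id}

url = expand(template, "0123456789abcdef")  # hypothetical project id
assert url == "https://api.testbed.osism.xyz:8776/v3/0123456789abcdef"
```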
2025-05-29 01:08:38.852447 | orchestrator | Thursday 29 May 2025 01:06:02 +0000 (0:00:08.132) 0:00:31.416 ********** 2025-05-29 01:08:38.852597 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.852618 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.852629 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.852655 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.852666 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': 
'8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.852675 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.852716 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-29 01:08:38.852729 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-29 01:08:38.852750 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-29 01:08:38.852760 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 
'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.852769 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.852802 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.852812 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.852828 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.852841 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.852850 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.852883 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.852894 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.852911 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.852924 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': 
['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.852933 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.852943 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 
'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.852974 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.852984 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.852999 | orchestrator | 2025-05-29 01:08:38.853008 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-29 01:08:38.853017 | orchestrator | Thursday 29 May 2025 01:06:05 +0000 (0:00:02.062) 0:00:33.479 ********** 2025-05-29 01:08:38.853026 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.853034 | orchestrator | 
skipping: [testbed-node-1] 2025-05-29 01:08:38.853043 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:38.853052 | orchestrator | included: /ansible/roles/cinder/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 01:08:38.853060 | orchestrator | 2025-05-29 01:08:38.853068 | orchestrator | TASK [cinder : Ensuring cinder service ceph config subdirs exists] ************* 2025-05-29 01:08:38.853077 | orchestrator | Thursday 29 May 2025 01:06:06 +0000 (0:00:01.180) 0:00:34.660 ********** 2025-05-29 01:08:38.853085 | orchestrator | changed: [testbed-node-5] => (item=cinder-volume) 2025-05-29 01:08:38.853094 | orchestrator | changed: [testbed-node-3] => (item=cinder-volume) 2025-05-29 01:08:38.853106 | orchestrator | changed: [testbed-node-4] => (item=cinder-volume) 2025-05-29 01:08:38.853115 | orchestrator | changed: [testbed-node-4] => (item=cinder-backup) 2025-05-29 01:08:38.853123 | orchestrator | changed: [testbed-node-3] => (item=cinder-backup) 2025-05-29 01:08:38.853132 | orchestrator | changed: [testbed-node-5] => (item=cinder-backup) 2025-05-29 01:08:38.853141 | orchestrator | 2025-05-29 01:08:38.853149 | orchestrator | TASK [cinder : Copying over multiple ceph.conf for cinder services] ************ 2025-05-29 01:08:38.853158 | orchestrator | Thursday 29 May 2025 01:06:11 +0000 (0:00:04.825) 0:00:39.485 ********** 2025-05-29 01:08:38.853168 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-29 01:08:38.853202 | orchestrator | skipping: [testbed-node-3] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-29 01:08:38.853214 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 
'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-29 01:08:38.853234 | orchestrator | skipping: [testbed-node-4] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-29 01:08:38.853249 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-29 01:08:38.853259 | orchestrator | skipping: [testbed-node-5] => (item=[{'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}])  2025-05-29 01:08:38.853291 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-29 01:08:38.853302 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-29 01:08:38.853317 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-29 01:08:38.853330 | orchestrator | changed: [testbed-node-3] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': 
'3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-29 01:08:38.853340 | orchestrator | changed: [testbed-node-4] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-29 01:08:38.853371 | orchestrator | changed: [testbed-node-5] => (item=[{'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}, {'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}]) 2025-05-29 01:08:38.853389 | orchestrator | 2025-05-29 01:08:38.853398 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-volume] ***************** 2025-05-29 01:08:38.853407 | orchestrator 
| Thursday 29 May 2025 01:06:17 +0000 (0:00:06.109) 0:00:45.595 ********** 2025-05-29 01:08:38.853415 | orchestrator | changed: [testbed-node-3] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-29 01:08:38.853425 | orchestrator | changed: [testbed-node-4] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-29 01:08:38.853434 | orchestrator | changed: [testbed-node-5] => (item={'name': 'rbd-1', 'cluster': 'ceph', 'enabled': True}) 2025-05-29 01:08:38.853443 | orchestrator | 2025-05-29 01:08:38.853452 | orchestrator | TASK [cinder : Copy over Ceph keyring files for cinder-backup] ***************** 2025-05-29 01:08:38.853461 | orchestrator | Thursday 29 May 2025 01:06:20 +0000 (0:00:03.008) 0:00:48.604 ********** 2025-05-29 01:08:38.853496 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder.keyring) 2025-05-29 01:08:38.853504 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder.keyring) 2025-05-29 01:08:38.853512 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder.keyring) 2025-05-29 01:08:38.853520 | orchestrator | changed: [testbed-node-3] => (item=ceph.client.cinder-backup.keyring) 2025-05-29 01:08:38.853528 | orchestrator | changed: [testbed-node-4] => (item=ceph.client.cinder-backup.keyring) 2025-05-29 01:08:38.853537 | orchestrator | changed: [testbed-node-5] => (item=ceph.client.cinder-backup.keyring) 2025-05-29 01:08:38.853545 | orchestrator | 2025-05-29 01:08:38.853553 | orchestrator | TASK [cinder : Ensuring config directory has correct owner and permission] ***** 2025-05-29 01:08:38.853562 | orchestrator | Thursday 29 May 2025 01:06:23 +0000 (0:00:03.264) 0:00:51.869 ********** 2025-05-29 01:08:38.853570 | orchestrator | ok: [testbed-node-3] => (item=cinder-volume) 2025-05-29 01:08:38.853579 | orchestrator | ok: [testbed-node-4] => (item=cinder-volume) 2025-05-29 01:08:38.853587 | orchestrator | ok: [testbed-node-3] => (item=cinder-backup) 2025-05-29 
01:08:38.853596 | orchestrator | ok: [testbed-node-5] => (item=cinder-volume) 2025-05-29 01:08:38.853605 | orchestrator | ok: [testbed-node-4] => (item=cinder-backup) 2025-05-29 01:08:38.853614 | orchestrator | ok: [testbed-node-5] => (item=cinder-backup) 2025-05-29 01:08:38.853623 | orchestrator | 2025-05-29 01:08:38.853632 | orchestrator | TASK [cinder : Check if policies shall be overwritten] ************************* 2025-05-29 01:08:38.853641 | orchestrator | Thursday 29 May 2025 01:06:24 +0000 (0:00:01.362) 0:00:53.231 ********** 2025-05-29 01:08:38.853650 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.853658 | orchestrator | 2025-05-29 01:08:38.853667 | orchestrator | TASK [cinder : Set cinder policy file] ***************************************** 2025-05-29 01:08:38.853681 | orchestrator | Thursday 29 May 2025 01:06:25 +0000 (0:00:00.205) 0:00:53.437 ********** 2025-05-29 01:08:38.853690 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.853700 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:38.853709 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:38.853718 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:08:38.853727 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:08:38.853736 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:08:38.853746 | orchestrator | 2025-05-29 01:08:38.853755 | orchestrator | TASK [cinder : include_tasks] ************************************************** 2025-05-29 01:08:38.853763 | orchestrator | Thursday 29 May 2025 01:06:25 +0000 (0:00:00.873) 0:00:54.310 ********** 2025-05-29 01:08:38.853773 | orchestrator | included: /ansible/roles/cinder/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2, testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 01:08:38.853782 | orchestrator | 2025-05-29 01:08:38.853790 | orchestrator | TASK [service-cert-copy : cinder | Copying over extra CA certificates] ********* 2025-05-29 
01:08:38.853798 | orchestrator | Thursday 29 May 2025 01:06:27 +0000 (0:00:01.309) 0:00:55.619 ********** 2025-05-29 01:08:38.853813 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-29 01:08:38.853854 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-29 01:08:38.853864 | 
orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-29 01:08:38.853878 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.853887 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.853903 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.853934 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.853945 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.853954 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.853967 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', 
'/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.853976 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.853990 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.853998 | orchestrator | 2025-05-29 01:08:38.854007 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS certificate] *** 2025-05-29 01:08:38.854041 | 
orchestrator | Thursday 29 May 2025 01:06:30 +0000 (0:00:02.922) 0:00:58.542 ********** 2025-05-29 01:08:38.854077 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.854086 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854099 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.854114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854123 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.854131 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:38.854139 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 
'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.854171 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854181 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:38.854190 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854199 | orchestrator | skipping: 
[testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854217 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:08:38.854226 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854235 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854243 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:08:38.854275 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854285 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.854293 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:38.854301 | orchestrator |
2025-05-29 01:08:38.854309 | orchestrator | TASK [service-cert-copy : cinder | Copying over backend internal TLS key] ******
2025-05-29 01:08:38.854317 | orchestrator | Thursday 29 May 2025 01:06:32 +0000 (0:00:02.241) 0:01:00.784 **********
2025-05-29 01:08:38.854334 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 01:08:38.854349 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck':
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854357 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.854391 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854400 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.854409 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:38.854417 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 
'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854426 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854440 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:08:38.854452 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.854461 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854522 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:38.854557 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854567 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854575 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:08:38.854584 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854603 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': 
{'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.854612 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:38.854620 | orchestrator |
2025-05-29 01:08:38.854629 | orchestrator | TASK [cinder : Copying over config.json files for services] ********************
2025-05-29 01:08:38.854639 | orchestrator | Thursday 29 May 2025 01:06:34 +0000 (0:00:02.229) 0:01:03.013 **********
2025-05-29 01:08:38.854648 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 01:08:38.854679 | orchestrator | skipping:
[testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854688 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.854702 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854713 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.854722 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854749 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': 
['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-29 01:08:38.854758 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-29 01:08:38.854772 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.854784 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-29 01:08:38.854793 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 
'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.854821 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.854830 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.854843 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': 
{'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854854 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854862 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.854871 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854915 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 
'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.854927 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854936 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.854944 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.854975 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.854985 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': 
['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.855000 | orchestrator | 2025-05-29 01:08:38.855009 | orchestrator | TASK [cinder : Copying over cinder-wsgi.conf] ********************************** 2025-05-29 01:08:38.855019 | orchestrator | Thursday 29 May 2025 01:06:37 +0000 (0:00:03.155) 0:01:06.169 ********** 2025-05-29 01:08:38.855027 | orchestrator | skipping: [testbed-node-3] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-29 01:08:38.855036 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:08:38.855044 | orchestrator | skipping: [testbed-node-5] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-29 01:08:38.855052 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:08:38.855061 | orchestrator | skipping: [testbed-node-4] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2)  2025-05-29 01:08:38.855069 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:08:38.855077 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-29 01:08:38.855085 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-29 01:08:38.855094 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/cinder/templates/cinder-wsgi.conf.j2) 2025-05-29 01:08:38.855102 | orchestrator | 2025-05-29 01:08:38.855111 | orchestrator | TASK [cinder : Copying over cinder.conf] *************************************** 2025-05-29 01:08:38.855119 | orchestrator | Thursday 29 May 2025 01:06:41 
+0000 (0:00:03.347) 0:01:09.517 ********** 2025-05-29 01:08:38.855135 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.855144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.855179 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.855195 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.855205 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 
'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.855217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.855226 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-29 01:08:38.855240 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 
'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.855254 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-29 01:08:38.855262 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}}) 2025-05-29 01:08:38.855274 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.855283 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', 
'/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.855298 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.855312 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.855321 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': 
True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.855333 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.855342 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.855350 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.855369 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.855378 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.855386 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.855399 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.855408 | orchestrator | changed: [testbed-node-4] 
=> (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.855425 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}}) 2025-05-29 01:08:38.855434 | orchestrator | 2025-05-29 01:08:38.855442 | orchestrator | TASK [cinder : Generating 'hostnqn' file for cinder_volume] ******************** 2025-05-29 01:08:38.855450 | orchestrator | Thursday 29 May 2025 01:06:54 +0000 (0:00:12.912) 0:01:22.429 ********** 2025-05-29 01:08:38.855458 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:08:38.855489 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.855498 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:08:38.855506 | orchestrator | changed: 
[testbed-node-4] 2025-05-29 01:08:38.855514 | orchestrator | changed: [testbed-node-5] 2025-05-29 01:08:38.855522 | orchestrator | changed: [testbed-node-3] 2025-05-29 01:08:38.855531 | orchestrator | 2025-05-29 01:08:38.855539 | orchestrator | TASK [cinder : Copying over existing policy file] ****************************** 2025-05-29 01:08:38.855548 | orchestrator | Thursday 29 May 2025 01:06:57 +0000 (0:00:03.076) 0:01:25.506 ********** 2025-05-29 01:08:38.855557 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})  2025-05-29 01:08:38.855570 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 
'timeout': '30'}}})  2025-05-29 01:08:38.855579 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.855594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})  2025-05-29 01:08:38.855603 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.855618 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 01:08:38.855627 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855635 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855647 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855656 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 01:08:38.855675 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855683 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855692 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855701 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:38.855709 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:38.855721 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 01:08:38.855738 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855751 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855759 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855767 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:08:38.855775 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 01:08:38.855788 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855796 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855811 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855819 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:08:38.855832 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 01:08:38.855841 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855853 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855867 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.855875 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:38.855883 | orchestrator |
2025-05-29 01:08:38.855891 | orchestrator | TASK [cinder : Copying over nfs_shares files for cinder_volume] ****************
2025-05-29 01:08:38.855899 | orchestrator | Thursday 29 May 2025 01:07:00 +0000 (0:00:02.922) 0:01:28.428 **********
2025-05-29 01:08:38.855907 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:38.855916 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:38.855925 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:38.855933 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:08:38.855940 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:08:38.855947 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:38.855954 | orchestrator |
2025-05-29 01:08:38.855962 | orchestrator | TASK [cinder : Check cinder containers] ****************************************
2025-05-29 01:08:38.855969 | orchestrator | Thursday 29 May 2025 01:07:01 +0000 (0:00:01.429) 0:01:29.857 **********
2025-05-29 01:08:38.855983 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 01:08:38.855992 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856002 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 01:08:38.856022 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 01:08:38.856031 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 01:08:38.856046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 01:08:38.856055 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856064 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-api', 'value': {'container_name': 'cinder_api', 'group': 'cinder-api', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-api:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:8776'], 'timeout': '30'}, 'haproxy': {'cinder_api': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}, 'cinder_api_external': {'enabled': 'yes', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8776', 'listen_port': '8776', 'tls_backend': 'no'}}}})
2025-05-29 01:08:38.856082 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856091 | orchestrator | changed: [testbed-node-1] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856101 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856115 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856124 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856140 | orchestrator | changed: [testbed-node-0] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856149 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856157 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856170 | orchestrator | changed: [testbed-node-2] => (item={'key': 'cinder-scheduler', 'value': {'container_name': 'cinder_scheduler', 'group': 'cinder-scheduler', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-scheduler:24.2.1.20241206', 'volumes': ['/etc/kolla/cinder-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856179 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856187 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856204 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856212 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-volume', 'value': {'container_name': 'cinder_volume', 'group': 'cinder-volume', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-volume:24.2.1.20241206', 'privileged': True, 'ipc_mode': 'host', 'tmpfs': [''], 'volumes': ['/etc/kolla/cinder-volume/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', '', 'kolla_logs:/var/log/kolla/', '', '/opt/cinder-driver-dm-clone:/var/lib/kolla/venv/lib/python3.10/site-packages/cinder-driver-dm-clone'], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-volume 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856224 | orchestrator | changed: [testbed-node-3] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856233 | orchestrator | changed: [testbed-node-5] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856242 | orchestrator | changed: [testbed-node-4] => (item={'key': 'cinder-backup', 'value': {'container_name': 'cinder_backup', 'group': 'cinder-backup', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/cinder-backup:24.2.1.20241206', 'privileged': True, 'volumes': ['/etc/kolla/cinder-backup/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/dev/:/dev/', '/lib/modules:/lib/modules:ro', '/run:/run:shared', 'cinder:/var/lib/cinder', 'iscsi_info:/etc/iscsi', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port cinder-backup 5672'], 'timeout': '30'}}})
2025-05-29 01:08:38.856255 | orchestrator |
2025-05-29 01:08:38.856264 | orchestrator | TASK [cinder : include_tasks] **************************************************
2025-05-29 01:08:38.856272 | orchestrator | Thursday 29 May 2025 01:07:05 +0000 (0:00:04.328) 0:01:34.186 **********
2025-05-29 01:08:38.856280 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:08:38.856288 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:08:38.856296 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:08:38.856304 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:08:38.856313 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:08:38.856321 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:08:38.856329 | orchestrator |
2025-05-29 01:08:38.856341 | orchestrator | TASK [cinder : Creating Cinder database] ***************************************
2025-05-29 01:08:38.856349 | orchestrator | Thursday 29 May 2025 01:07:06 +0000 (0:00:00.919) 0:01:35.105 **********
2025-05-29 01:08:38.856358 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:08:38.856366 | orchestrator |
2025-05-29 01:08:38.856374 | orchestrator | TASK [cinder : 
Creating Cinder database user and setting permissions] ********** 2025-05-29 01:08:38.856382 | orchestrator | Thursday 29 May 2025 01:07:09 +0000 (0:00:02.688) 0:01:37.793 ********** 2025-05-29 01:08:38.856390 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:08:38.856398 | orchestrator | 2025-05-29 01:08:38.856406 | orchestrator | TASK [cinder : Running Cinder bootstrap container] ***************************** 2025-05-29 01:08:38.856415 | orchestrator | Thursday 29 May 2025 01:07:12 +0000 (0:00:02.711) 0:01:40.505 ********** 2025-05-29 01:08:38.856423 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:08:38.856430 | orchestrator | 2025-05-29 01:08:38.856439 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-29 01:08:38.856448 | orchestrator | Thursday 29 May 2025 01:07:28 +0000 (0:00:16.585) 0:01:57.091 ********** 2025-05-29 01:08:38.856457 | orchestrator | 2025-05-29 01:08:38.856487 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-29 01:08:38.856496 | orchestrator | Thursday 29 May 2025 01:07:28 +0000 (0:00:00.050) 0:01:57.141 ********** 2025-05-29 01:08:38.856503 | orchestrator | 2025-05-29 01:08:38.856512 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-29 01:08:38.856520 | orchestrator | Thursday 29 May 2025 01:07:28 +0000 (0:00:00.170) 0:01:57.311 ********** 2025-05-29 01:08:38.856597 | orchestrator | 2025-05-29 01:08:38.856609 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-29 01:08:38.856617 | orchestrator | Thursday 29 May 2025 01:07:28 +0000 (0:00:00.043) 0:01:57.355 ********** 2025-05-29 01:08:38.856625 | orchestrator | 2025-05-29 01:08:38.856633 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-29 01:08:38.856641 | orchestrator | Thursday 29 May 2025 01:07:28 
+0000 (0:00:00.041) 0:01:57.396 ********** 2025-05-29 01:08:38.856649 | orchestrator | 2025-05-29 01:08:38.856658 | orchestrator | TASK [cinder : Flush handlers] ************************************************* 2025-05-29 01:08:38.856666 | orchestrator | Thursday 29 May 2025 01:07:29 +0000 (0:00:00.044) 0:01:57.441 ********** 2025-05-29 01:08:38.856674 | orchestrator | 2025-05-29 01:08:38.856682 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-api container] ************************ 2025-05-29 01:08:38.856690 | orchestrator | Thursday 29 May 2025 01:07:29 +0000 (0:00:00.179) 0:01:57.620 ********** 2025-05-29 01:08:38.856698 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:08:38.856706 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:08:38.856715 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:08:38.856722 | orchestrator | 2025-05-29 01:08:38.856730 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-scheduler container] ****************** 2025-05-29 01:08:38.856751 | orchestrator | Thursday 29 May 2025 01:07:48 +0000 (0:00:19.235) 0:02:16.855 ********** 2025-05-29 01:08:38.856760 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:08:38.856777 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:08:38.856785 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:08:38.856793 | orchestrator | 2025-05-29 01:08:38.856800 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-volume container] ********************* 2025-05-29 01:08:38.856809 | orchestrator | Thursday 29 May 2025 01:08:01 +0000 (0:00:12.669) 0:02:29.525 ********** 2025-05-29 01:08:38.856817 | orchestrator | changed: [testbed-node-4] 2025-05-29 01:08:38.856826 | orchestrator | changed: [testbed-node-3] 2025-05-29 01:08:38.856834 | orchestrator | changed: [testbed-node-5] 2025-05-29 01:08:38.856843 | orchestrator | 2025-05-29 01:08:38.856852 | orchestrator | RUNNING HANDLER [cinder : Restart cinder-backup container] ********************* 2025-05-29 
01:08:38.856861 | orchestrator | Thursday 29 May 2025 01:08:25 +0000 (0:00:24.000) 0:02:53.525 ********** 2025-05-29 01:08:38.856870 | orchestrator | changed: [testbed-node-5] 2025-05-29 01:08:38.856878 | orchestrator | changed: [testbed-node-4] 2025-05-29 01:08:38.856886 | orchestrator | changed: [testbed-node-3] 2025-05-29 01:08:38.856894 | orchestrator | 2025-05-29 01:08:38.856902 | orchestrator | RUNNING HANDLER [cinder : Wait for cinder services to update service versions] *** 2025-05-29 01:08:38.856910 | orchestrator | Thursday 29 May 2025 01:08:36 +0000 (0:00:11.621) 0:03:05.146 ********** 2025-05-29 01:08:38.856918 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:08:38.856925 | orchestrator | 2025-05-29 01:08:38.856933 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 01:08:38.856941 | orchestrator | testbed-node-0 : ok=21  changed=15  unreachable=0 failed=0 skipped=10  rescued=0 ignored=0 2025-05-29 01:08:38.856950 | orchestrator | testbed-node-1 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-29 01:08:38.856957 | orchestrator | testbed-node-2 : ok=12  changed=8  unreachable=0 failed=0 skipped=8  rescued=0 ignored=0 2025-05-29 01:08:38.856964 | orchestrator | testbed-node-3 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-29 01:08:38.856972 | orchestrator | testbed-node-4 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-29 01:08:38.856979 | orchestrator | testbed-node-5 : ok=18  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-29 01:08:38.856986 | orchestrator | 2025-05-29 01:08:38.856993 | orchestrator | 2025-05-29 01:08:38.857000 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 01:08:38.857008 | orchestrator | Thursday 29 May 2025 01:08:37 +0000 (0:00:00.609) 0:03:05.756 ********** 2025-05-29 
01:08:38.857021 | orchestrator | =============================================================================== 2025-05-29 01:08:38.857029 | orchestrator | cinder : Restart cinder-volume container ------------------------------- 24.00s 2025-05-29 01:08:38.857036 | orchestrator | cinder : Restart cinder-api container ---------------------------------- 19.24s 2025-05-29 01:08:38.857044 | orchestrator | cinder : Running Cinder bootstrap container ---------------------------- 16.59s 2025-05-29 01:08:38.857052 | orchestrator | cinder : Copying over cinder.conf -------------------------------------- 12.91s 2025-05-29 01:08:38.857059 | orchestrator | cinder : Restart cinder-scheduler container ---------------------------- 12.67s 2025-05-29 01:08:38.857067 | orchestrator | cinder : Restart cinder-backup container ------------------------------- 11.62s 2025-05-29 01:08:38.857074 | orchestrator | service-ks-register : cinder | Granting user roles ---------------------- 8.13s 2025-05-29 01:08:38.857082 | orchestrator | service-ks-register : cinder | Creating endpoints ----------------------- 6.82s 2025-05-29 01:08:38.857095 | orchestrator | cinder : Copying over multiple ceph.conf for cinder services ------------ 6.11s 2025-05-29 01:08:38.857103 | orchestrator | cinder : Ensuring cinder service ceph config subdirs exists ------------- 4.83s 2025-05-29 01:08:38.857111 | orchestrator | cinder : Check cinder containers ---------------------------------------- 4.33s 2025-05-29 01:08:38.857119 | orchestrator | service-ks-register : cinder | Creating users --------------------------- 3.84s 2025-05-29 01:08:38.857126 | orchestrator | service-ks-register : cinder | Creating projects ------------------------ 3.36s 2025-05-29 01:08:38.857134 | orchestrator | cinder : Copying over cinder-wsgi.conf ---------------------------------- 3.35s 2025-05-29 01:08:38.857142 | orchestrator | cinder : Copy over Ceph keyring files for cinder-backup ----------------- 3.26s 2025-05-29 01:08:38.857150 
| orchestrator | service-ks-register : cinder | Creating roles --------------------------- 3.20s 2025-05-29 01:08:38.857158 | orchestrator | service-ks-register : cinder | Creating services ------------------------ 3.19s 2025-05-29 01:08:38.857166 | orchestrator | cinder : Copying over config.json files for services -------------------- 3.16s 2025-05-29 01:08:38.857174 | orchestrator | cinder : Generating 'hostnqn' file for cinder_volume -------------------- 3.08s 2025-05-29 01:08:38.857181 | orchestrator | cinder : Copy over Ceph keyring files for cinder-volume ----------------- 3.01s 2025-05-29 01:08:38.857189 | orchestrator | 2025-05-29 01:08:38 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:08:38.857197 | orchestrator | 2025-05-29 01:08:38 | INFO  | Task 87cb6d19-a793-4f16-8095-99551423e085 is in state STARTED 2025-05-29 01:08:38.857205 | orchestrator | 2025-05-29 01:08:38 | INFO  | Task 805e288f-1c93-45ff-b9d4-966879aaf853 is in state STARTED 2025-05-29 01:08:38.857219 | orchestrator | 2025-05-29 01:08:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:08:38.857227 | orchestrator | 2025-05-29 01:08:38 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:08:41.909236 | orchestrator | 2025-05-29 01:08:41 | INFO  | Task c9a2808c-a7e5-4b31-bdd4-1912afbb585a is in state STARTED 2025-05-29 01:08:41.909403 | orchestrator | 2025-05-29 01:08:41 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:08:41.910982 | orchestrator | 2025-05-29 01:08:41 | INFO  | Task 87cb6d19-a793-4f16-8095-99551423e085 is in state STARTED 2025-05-29 01:08:41.912618 | orchestrator | 2025-05-29 01:08:41 | INFO  | Task 805e288f-1c93-45ff-b9d4-966879aaf853 is in state STARTED 2025-05-29 01:08:41.914288 | orchestrator | 2025-05-29 01:08:41 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:08:41.914561 | orchestrator | 2025-05-29 01:08:41 | INFO 
 | Wait 1 second(s) until the next check
[... identical "Task <uuid> is in state STARTED" / "Wait 1 second(s) until the next check" polling cycles repeated every ~3 seconds from 01:08:44 to 01:09:33; all five tasks remained in state STARTED ...]
2025-05-29 01:09:36.940188 | orchestrator | 2025-05-29 01:09:36 | INFO  | Task c9a2808c-a7e5-4b31-bdd4-1912afbb585a is in state SUCCESS
[... polling continued from 01:09:39 to 01:10:19 for the four remaining tasks, all in state STARTED ...]
2025-05-29 01:10:22.797086 | orchestrator | 2025-05-29 01:10:22 | INFO  | Task 805e288f-1c93-45ff-b9d4-966879aaf853 is in state SUCCESS
[... polling continued from 01:10:25 for the three remaining tasks, all in state STARTED ...]
check 2025-05-29 01:10:35.028234 | orchestrator | 2025-05-29 01:10:35 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:10:35.028336 | orchestrator | 2025-05-29 01:10:35 | INFO  | Task 87cb6d19-a793-4f16-8095-99551423e085 is in state STARTED 2025-05-29 01:10:35.029127 | orchestrator | 2025-05-29 01:10:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:10:35.029154 | orchestrator | 2025-05-29 01:10:35 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:10:38.075606 | orchestrator | 2025-05-29 01:10:38 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:10:38.075751 | orchestrator | 2025-05-29 01:10:38 | INFO  | Task 87cb6d19-a793-4f16-8095-99551423e085 is in state STARTED 2025-05-29 01:10:38.075881 | orchestrator | 2025-05-29 01:10:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:10:38.075899 | orchestrator | 2025-05-29 01:10:38 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:10:41.130368 | orchestrator | 2025-05-29 01:10:41 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:10:41.132933 | orchestrator | 2025-05-29 01:10:41 | INFO  | Task 87cb6d19-a793-4f16-8095-99551423e085 is in state SUCCESS 2025-05-29 01:10:41.134537 | orchestrator | 2025-05-29 01:10:41.134580 | orchestrator | 2025-05-29 01:10:41.134592 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-29 01:10:41.134604 | orchestrator | 2025-05-29 01:10:41.134641 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-29 01:10:41.134653 | orchestrator | Thursday 29 May 2025 01:08:40 +0000 (0:00:00.391) 0:00:00.391 ********** 2025-05-29 01:10:41.134664 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:10:41.134677 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:10:41.134688 | orchestrator | ok: 
[testbed-node-2]
2025-05-29 01:10:41.134698 | orchestrator |
2025-05-29 01:10:41.134709 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-29 01:10:41.134720 | orchestrator | Thursday 29 May 2025 01:08:40 +0000 (0:00:00.427) 0:00:00.818 **********
2025-05-29 01:10:41.134731 | orchestrator | ok: [testbed-node-0] => (item=enable_octavia_True)
2025-05-29 01:10:41.134742 | orchestrator | ok: [testbed-node-1] => (item=enable_octavia_True)
2025-05-29 01:10:41.134753 | orchestrator | ok: [testbed-node-2] => (item=enable_octavia_True)
2025-05-29 01:10:41.134764 | orchestrator |
2025-05-29 01:10:41.134775 | orchestrator | PLAY [Apply role octavia] ******************************************************
2025-05-29 01:10:41.134785 | orchestrator |
2025-05-29 01:10:41.134820 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-05-29 01:10:41.134832 | orchestrator | Thursday 29 May 2025 01:08:41 +0000 (0:00:00.293) 0:00:01.112 **********
2025-05-29 01:10:41.134842 | orchestrator | included: /ansible/roles/octavia/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 01:10:41.134854 | orchestrator |
2025-05-29 01:10:41.134865 | orchestrator | TASK [service-ks-register : octavia | Creating services] ***********************
2025-05-29 01:10:41.134876 | orchestrator | Thursday 29 May 2025 01:08:41 +0000 (0:00:00.614) 0:00:01.727 **********
2025-05-29 01:10:41.134887 | orchestrator | changed: [testbed-node-0] => (item=octavia (load-balancer))
2025-05-29 01:10:41.134898 | orchestrator |
2025-05-29 01:10:41.134908 | orchestrator | TASK [service-ks-register : octavia | Creating endpoints] **********************
2025-05-29 01:10:41.134919 | orchestrator | Thursday 29 May 2025 01:08:45 +0000 (0:00:03.519) 0:00:05.246 **********
2025-05-29 01:10:41.134930 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api-int.testbed.osism.xyz:9876 -> internal)
2025-05-29 01:10:41.134941 | orchestrator | changed: [testbed-node-0] => (item=octavia -> https://api.testbed.osism.xyz:9876 -> public)
2025-05-29 01:10:41.134952 | orchestrator |
2025-05-29 01:10:41.134962 | orchestrator | TASK [service-ks-register : octavia | Creating projects] ***********************
2025-05-29 01:10:41.134973 | orchestrator | Thursday 29 May 2025 01:08:51 +0000 (0:00:06.264) 0:00:11.510 **********
2025-05-29 01:10:41.134984 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-29 01:10:41.134995 | orchestrator |
2025-05-29 01:10:41.135005 | orchestrator | TASK [service-ks-register : octavia | Creating users] **************************
2025-05-29 01:10:41.135016 | orchestrator | Thursday 29 May 2025 01:08:54 +0000 (0:00:03.334) 0:00:14.845 **********
2025-05-29 01:10:41.135043 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-29 01:10:41.135054 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-05-29 01:10:41.135066 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service)
2025-05-29 01:10:41.135076 | orchestrator |
2025-05-29 01:10:41.135087 | orchestrator | TASK [service-ks-register : octavia | Creating roles] **************************
2025-05-29 01:10:41.135098 | orchestrator | Thursday 29 May 2025 01:09:02 +0000 (0:00:08.055) 0:00:22.901 **********
2025-05-29 01:10:41.135109 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-29 01:10:41.135119 | orchestrator |
2025-05-29 01:10:41.135130 | orchestrator | TASK [service-ks-register : octavia | Granting user roles] *********************
2025-05-29 01:10:41.135141 | orchestrator | Thursday 29 May 2025 01:09:06 +0000 (0:00:03.209) 0:00:26.111 **********
2025-05-29 01:10:41.135151 | orchestrator | changed: [testbed-node-0] => (item=octavia -> service -> admin)
2025-05-29 01:10:41.135162 | orchestrator | ok: [testbed-node-0] => (item=octavia -> service -> admin)
2025-05-29 01:10:41.135172 | 
orchestrator |
2025-05-29 01:10:41.135183 | orchestrator | TASK [octavia : Adding octavia related roles] **********************************
2025-05-29 01:10:41.135222 | orchestrator | Thursday 29 May 2025 01:09:13 +0000 (0:00:07.590) 0:00:33.701 **********
2025-05-29 01:10:41.135233 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_observer)
2025-05-29 01:10:41.135243 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_global_observer)
2025-05-29 01:10:41.135254 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_member)
2025-05-29 01:10:41.135266 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_admin)
2025-05-29 01:10:41.135288 | orchestrator | changed: [testbed-node-0] => (item=load-balancer_quota_admin)
2025-05-29 01:10:41.135299 | orchestrator |
2025-05-29 01:10:41.135454 | orchestrator | TASK [octavia : include_tasks] *************************************************
2025-05-29 01:10:41.135470 | orchestrator | Thursday 29 May 2025 01:09:29 +0000 (0:00:15.441) 0:00:49.143 **********
2025-05-29 01:10:41.135490 | orchestrator | included: /ansible/roles/octavia/tasks/prepare.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 01:10:41.135509 | orchestrator |
2025-05-29 01:10:41.135529 | orchestrator | TASK [octavia : Create amphora flavor] *****************************************
2025-05-29 01:10:41.135549 | orchestrator | Thursday 29 May 2025 01:09:30 +0000 (0:00:00.982) 0:00:50.126 **********
2025-05-29 01:10:41.135586 | orchestrator | fatal: [testbed-node-0]: FAILED! => {"action": "os_nova_flavor", "changed": false, "extra_data": {"data": null, "details": "503 Service Unavailable: No server is available to handle this request.: ", "response": "503 Service Unavailable\nNo server is available to handle this request.\n\n"}, "msg": "HttpException: 503: Server Error for url: https://api-int.testbed.osism.xyz:8774/v2.1/flavors/amphora, 503 Service Unavailable: No server is available to handle this request.: "}
2025-05-29 01:10:41.135602 | orchestrator |
2025-05-29 01:10:41.135614 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:10:41.135626 | orchestrator | testbed-node-0 : ok=11  changed=5  unreachable=0 failed=1  skipped=0 rescued=0 ignored=0
2025-05-29 01:10:41.135637 | orchestrator | testbed-node-1 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:10:41.135649 | orchestrator | testbed-node-2 : ok=4  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:10:41.135660 | orchestrator |
2025-05-29 01:10:41.135671 | orchestrator |
2025-05-29 01:10:41.135681 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 01:10:41.135692 | orchestrator | Thursday 29 May 2025 01:09:33 +0000 (0:00:03.305) 0:00:53.431 **********
2025-05-29 01:10:41.135703 | orchestrator | ===============================================================================
2025-05-29 01:10:41.135713 | orchestrator | octavia : Adding octavia related roles --------------------------------- 15.44s
2025-05-29 01:10:41.135724 | orchestrator | service-ks-register : octavia | Creating users -------------------------- 8.06s
2025-05-29 01:10:41.135734 | orchestrator | service-ks-register : octavia | Granting user roles --------------------- 7.59s
2025-05-29 01:10:41.135745 | orchestrator | service-ks-register : octavia | Creating endpoints ---------------------- 6.26s
2025-05-29 01:10:41.135756 | orchestrator | service-ks-register : octavia | Creating services ----------------------- 3.52s
2025-05-29 01:10:41.135767 | orchestrator | service-ks-register : octavia | Creating projects ----------------------- 3.33s
2025-05-29 01:10:41.135777 | orchestrator | octavia : Create amphora flavor ----------------------------------------- 3.31s
2025-05-29 01:10:41.135809 | orchestrator | service-ks-register : octavia | Creating roles -------------------------- 3.21s
2025-05-29 01:10:41.135821 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.98s
2025-05-29 01:10:41.135831 | orchestrator | octavia : include_tasks ------------------------------------------------- 0.61s
2025-05-29 01:10:41.135842 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.43s
2025-05-29 01:10:41.135852 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.29s
2025-05-29 01:10:41.135872 | orchestrator |
2025-05-29 01:10:41.135882 | orchestrator |
2025-05-29 01:10:41.135893 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-29 01:10:41.135904 | orchestrator |
2025-05-29 01:10:41.135914 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-29 01:10:41.135932 | orchestrator | Thursday 29 May 2025 01:08:06 +0000 (0:00:00.262) 0:00:00.262 **********
2025-05-29 01:10:41.135943 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:10:41.135954 | orchestrator | ok: [testbed-node-1]
2025-05-29 01:10:41.135964 | orchestrator | ok: [testbed-node-2]
2025-05-29 01:10:41.135975 | orchestrator |
2025-05-29 01:10:41.135986 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-29 01:10:41.135996 | orchestrator | Thursday 29 May 2025 01:08:06 +0000 (0:00:00.547) 0:00:00.810 **********
2025-05-29 01:10:41.136037 | orchestrator | ok: [testbed-node-0] => (item=enable_nova_True)
2025-05-29 01:10:41.136048 | orchestrator | ok: [testbed-node-1] => (item=enable_nova_True)
2025-05-29 01:10:41.136062 | orchestrator | ok: [testbed-node-2] => 
(item=enable_nova_True)
2025-05-29 01:10:41.136080 | orchestrator |
2025-05-29 01:10:41.136091 | orchestrator | PLAY [Wait for the Nova service] ***********************************************
2025-05-29 01:10:41.136180 | orchestrator |
2025-05-29 01:10:41.136191 | orchestrator | TASK [Waiting for Nova public port to be UP] ***********************************
2025-05-29 01:10:41.136202 | orchestrator | Thursday 29 May 2025 01:08:07 +0000 (0:00:00.714) 0:00:01.525 **********
2025-05-29 01:10:41.136213 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:10:41.136223 | orchestrator | ok: [testbed-node-2]
2025-05-29 01:10:41.136234 | orchestrator | ok: [testbed-node-1]
2025-05-29 01:10:41.136245 | orchestrator |
2025-05-29 01:10:41.136256 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:10:41.136270 | orchestrator | testbed-node-0 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:10:41.136324 | orchestrator | testbed-node-1 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:10:41.136338 | orchestrator | testbed-node-2 : ok=3  changed=0 unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:10:41.136350 | orchestrator |
2025-05-29 01:10:41.136369 | orchestrator |
2025-05-29 01:10:41.136451 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 01:10:41.136464 | orchestrator | Thursday 29 May 2025 01:10:20 +0000 (0:02:13.343) 0:02:14.869 **********
2025-05-29 01:10:41.136475 | orchestrator | ===============================================================================
2025-05-29 01:10:41.136486 | orchestrator | Waiting for Nova public port to be UP --------------------------------- 133.34s
2025-05-29 01:10:41.136497 | orchestrator | Group hosts based on enabled services ----------------------------------- 0.71s
2025-05-29 01:10:41.136508 | orchestrator | Group hosts based on Kolla action --------------------------------------- 0.55s
2025-05-29 01:10:41.136519 | orchestrator |
2025-05-29 01:10:41.136529 | orchestrator |
2025-05-29 01:10:41.136540 | orchestrator | PLAY [Group hosts based on configuration] **************************************
2025-05-29 01:10:41.136551 | orchestrator |
2025-05-29 01:10:41.136562 | orchestrator | TASK [Group hosts based on Kolla action] ***************************************
2025-05-29 01:10:41.136583 | orchestrator | Thursday 29 May 2025 01:08:40 +0000 (0:00:00.307) 0:00:00.307 **********
2025-05-29 01:10:41.136594 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:10:41.136605 | orchestrator | ok: [testbed-node-1]
2025-05-29 01:10:41.136615 | orchestrator | ok: [testbed-node-2]
2025-05-29 01:10:41.136626 | orchestrator |
2025-05-29 01:10:41.136637 | orchestrator | TASK [Group hosts based on enabled services] ***********************************
2025-05-29 01:10:41.136647 | orchestrator | Thursday 29 May 2025 01:08:41 +0000 (0:00:00.320) 0:00:00.628 **********
2025-05-29 01:10:41.136658 | orchestrator | ok: [testbed-node-0] => (item=enable_grafana_True)
2025-05-29 01:10:41.136677 | orchestrator | ok: [testbed-node-1] => (item=enable_grafana_True)
2025-05-29 01:10:41.136688 | orchestrator | ok: [testbed-node-2] => (item=enable_grafana_True)
2025-05-29 01:10:41.136699 | orchestrator |
2025-05-29 01:10:41.136710 | orchestrator | PLAY [Apply role grafana] ******************************************************
2025-05-29 01:10:41.136720 | orchestrator |
2025-05-29 01:10:41.136731 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-05-29 01:10:41.136742 | orchestrator | Thursday 29 May 2025 01:08:41 +0000 (0:00:00.206) 0:00:00.834 **********
2025-05-29 01:10:41.136753 | orchestrator | included: /ansible/roles/grafana/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 01:10:41.136780 | orchestrator |
2025-05-29 01:10:41.136811 | 
orchestrator | TASK [grafana : Ensuring config directories exist] *****************************
2025-05-29 01:10:41.136822 | orchestrator | Thursday 29 May 2025 01:08:41 +0000 (0:00:00.538) 0:00:01.373 **********
2025-05-29 01:10:41.136835 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.136856 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.136886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.136898 | orchestrator |
2025-05-29 01:10:41.136908 | orchestrator | TASK [grafana : Check if extra configuration file exists] **********************
2025-05-29 01:10:41.136919 | orchestrator | Thursday 29 May 2025 01:08:42 +0000 (0:00:00.725) 0:00:02.099 **********
2025-05-29 01:10:41.136929 | orchestrator | [WARNING]: Skipped '/operations/prometheus/grafana' path due to this access
2025-05-29 01:10:41.136940 | orchestrator | issue: '/operations/prometheus/grafana' is not a directory
2025-05-29 01:10:41.136951 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-29 01:10:41.136962 | orchestrator |
2025-05-29 01:10:41.136972 | orchestrator | TASK [grafana : include_tasks] *************************************************
2025-05-29 01:10:41.136983 | orchestrator | Thursday 29 May 2025 01:08:43 +0000 (0:00:00.478) 0:00:02.577 **********
2025-05-29 01:10:41.136993 | orchestrator | included: /ansible/roles/grafana/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 01:10:41.137004 | orchestrator |
2025-05-29 01:10:41.137015 | orchestrator | TASK [service-cert-copy : grafana | Copying over extra CA certificates] ********
2025-05-29 01:10:41.137034 | orchestrator | Thursday 29 May 2025 01:08:43 +0000 (0:00:00.508) 0:00:03.086 **********
2025-05-29 01:10:41.137056 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137068 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137080 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137091 | orchestrator |
2025-05-29 01:10:41.137102 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal 
TLS certificate] ***
2025-05-29 01:10:41.137112 | orchestrator | Thursday 29 May 2025 01:08:45 +0000 (0:00:01.390) 0:00:04.476 **********
2025-05-29 01:10:41.137129 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137141 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:10:41.137152 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137171 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:10:41.137189 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137201 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:10:41.137212 | orchestrator |
2025-05-29 01:10:41.137223 | orchestrator | TASK [service-cert-copy : grafana | Copying over backend internal TLS key] *****
2025-05-29 01:10:41.137233 | orchestrator | Thursday 29 May 2025 01:08:45 +0000 (0:00:00.922) 0:00:05.323 **********
2025-05-29 01:10:41.137244 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137256 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:10:41.137267 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137278 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:10:41.137294 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137305 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:10:41.137316 | orchestrator |
2025-05-29 01:10:41.137327 | orchestrator | TASK [grafana : Copying over config.json files] ********************************
2025-05-29 01:10:41.137338 | orchestrator | Thursday 29 May 2025 01:08:46 +0000 (0:00:00.922) 0:00:06.246 **********
2025-05-29 01:10:41.137349 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 
'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137385 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137398 | orchestrator |
2025-05-29 01:10:41.137409 | orchestrator | TASK [grafana : Copying over grafana.ini] **************************************
2025-05-29 01:10:41.137420 | orchestrator | Thursday 29 May 2025 01:08:48 +0000 (0:00:01.472) 0:00:07.718 **********
2025-05-29 01:10:41.137431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137442 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137458 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}})
2025-05-29 01:10:41.137477 | orchestrator |
2025-05-29 01:10:41.137488 | orchestrator | TASK [grafana : Copying over extra configuration file] *************************
2025-05-29 01:10:41.137499 | orchestrator | Thursday 29 May 2025 01:08:49 +0000 (0:00:01.649) 0:00:09.368 **********
2025-05-29 01:10:41.137509 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:10:41.137520 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:10:41.137531 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:10:41.137541 | orchestrator |
2025-05-29 01:10:41.137552 | orchestrator | TASK [grafana : Configuring Prometheus as data source for Grafana] *************
2025-05-29 01:10:41.137563 | orchestrator | Thursday 29 May 2025 01:08:50 +0000 (0:00:00.322) 0:00:09.690 **********
2025-05-29 01:10:41.137573 | orchestrator | changed: [testbed-node-0] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-29 01:10:41.137584 | orchestrator | changed: [testbed-node-1] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-29 01:10:41.137595 | orchestrator | changed: [testbed-node-2] => (item=/ansible/roles/grafana/templates/prometheus.yaml.j2)
2025-05-29 01:10:41.137605 | orchestrator |
2025-05-29 01:10:41.137616 | orchestrator | TASK [grafana : Configuring dashboards provisioning] ***************************
2025-05-29 01:10:41.137627 | orchestrator | Thursday 29 May 2025 01:08:51 +0000 (0:00:01.380) 0:00:11.071 **********
2025-05-29 01:10:41.137637 | orchestrator | changed: [testbed-node-0] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-29 01:10:41.137648 | orchestrator | changed: [testbed-node-1] => (item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml)
2025-05-29 01:10:41.137659 | orchestrator | changed: [testbed-node-2] => 
(item=/opt/configuration/environments/kolla/files/overlays/grafana/provisioning.yaml) 2025-05-29 01:10:41.137670 | orchestrator | 2025-05-29 01:10:41.137686 | orchestrator | TASK [grafana : Find custom grafana dashboards] ******************************** 2025-05-29 01:10:41.137697 | orchestrator | Thursday 29 May 2025 01:08:53 +0000 (0:00:01.394) 0:00:12.465 ********** 2025-05-29 01:10:41.137708 | orchestrator | ok: [testbed-node-0 -> localhost] 2025-05-29 01:10:41.137719 | orchestrator | 2025-05-29 01:10:41.137730 | orchestrator | TASK [grafana : Find templated grafana dashboards] ***************************** 2025-05-29 01:10:41.137740 | orchestrator | Thursday 29 May 2025 01:08:53 +0000 (0:00:00.472) 0:00:12.938 ********** 2025-05-29 01:10:41.137751 | orchestrator | [WARNING]: Skipped '/etc/kolla/grafana/dashboards' path due to this access 2025-05-29 01:10:41.137762 | orchestrator | issue: '/etc/kolla/grafana/dashboards' is not a directory 2025-05-29 01:10:41.137773 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:10:41.137783 | orchestrator | ok: [testbed-node-1] 2025-05-29 01:10:41.137871 | orchestrator | ok: [testbed-node-2] 2025-05-29 01:10:41.137882 | orchestrator | 2025-05-29 01:10:41.137894 | orchestrator | TASK [grafana : Prune templated Grafana dashboards] **************************** 2025-05-29 01:10:41.137904 | orchestrator | Thursday 29 May 2025 01:08:54 +0000 (0:00:00.923) 0:00:13.861 ********** 2025-05-29 01:10:41.137915 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:10:41.137926 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:10:41.137937 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:10:41.137947 | orchestrator | 2025-05-29 01:10:41.137958 | orchestrator | TASK [grafana : Copying over custom dashboards] ******************************** 2025-05-29 01:10:41.137969 | orchestrator | Thursday 29 May 2025 01:08:54 +0000 (0:00:00.513) 0:00:14.375 ********** 2025-05-29 01:10:41.137981 | orchestrator | changed: 
[testbed-node-0] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1326055, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7763972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.138061 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1326055, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7763972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.138076 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rgw-s3-analytics.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rgw-s3-analytics.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 167897, 'inode': 1326055, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7763972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 
2025-05-29 01:10:41.138087 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1326047, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.771397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.138106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1326047, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.771397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.138117 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19695, 'inode': 1326047, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.771397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': 
False, 'isgid': False}}) 2025-05-29 01:10:41.138127 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1325518, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.565396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.138144 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1325518, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.565396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.138159 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osds-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osds-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38432, 'inode': 1325518, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.565396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 
'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.138169 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1326051, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7723973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.138179 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1326051, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7723973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.138998 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 12997, 'inode': 1326051, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7723973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 
'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139097 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1325510, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.560396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139113 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1325510, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.560396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139165 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/host-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/host-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 44791, 'inode': 1325510, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.560396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 
'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139179 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1325520, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.566396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139191 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1325520, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.566396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139223 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-detail.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-detail.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 19609, 'inode': 1325520, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.566396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': 
False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139245 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1326050, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7723973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139265 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1326050, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7723973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139307 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-sync-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-sync-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 16156, 'inode': 1326050, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7723973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': 
True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139328 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1325509, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.560396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139348 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1325509, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.560396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139367 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/cephfs-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/cephfs-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 9025, 'inode': 1325509, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.560396, 'gr_name': 
'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1325478, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.551396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139391 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1325478, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.551396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139411 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1325513, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.561396, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139429 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/README.md', 'value': {'path': '/operations/grafana/dashboards/ceph/README.md', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 84, 'inode': 1325478, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.551396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139440 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1325513, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.561396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139452 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1325491, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.5553958, 'gr_name': 'root', 'pw_name': 
'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139472 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/hosts-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/hosts-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 27218, 'inode': 1325513, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.561396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139484 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1325491, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.5553958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139503 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1326049, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.771397, 
'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139521 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 34113, 'inode': 1325491, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.5553958, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139534 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1326049, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.771397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.139565 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1325516, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 
1737057119.0, 'ctime': 1748477716.562396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139597 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/radosgw-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/radosgw-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 39370, 'inode': 1326049, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.771397, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139611 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1325516, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.562396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139631 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1326052, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7743971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139645 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/multi-cluster-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/multi-cluster-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 62371, 'inode': 1325516, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.562396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139663 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1326052, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7743971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139676 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1325506, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.559396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139689 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/rbd-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/rbd-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25686, 'inode': 1326052, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7743971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139711 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1325506, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.559396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139730 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1325522, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7693973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139744 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_pools.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_pools.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 25279, 'inode': 1325506, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.559396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139761 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1325522, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7693973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139775 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1325482, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.554396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/pool-overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/pool-overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 49139, 'inode': 1325522, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7693973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139837 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1325482, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.554396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139857 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1325496, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.5583959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139870 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph-cluster-advanced.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph-cluster-advanced.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 117836, 'inode': 1325482, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.554396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139886 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1325496, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.5583959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139898 | orchestrator | changed: [testbed-node-0] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1325517, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.563396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139909 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/ceph_overview.json', 'value': {'path': '/operations/grafana/dashboards/ceph/ceph_overview.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 80386, 'inode': 1325496, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.5583959, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139927 | orchestrator | changed: [testbed-node-2] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1325517, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.563396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1326075, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7973974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139963 | orchestrator | changed: [testbed-node-1] => (item={'key': 'ceph/osd-device-details.json', 'value': {'path': '/operations/grafana/dashboards/ceph/osd-device-details.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 26655, 'inode': 1325517, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.563396, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139975 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1326075, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7973974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.139991 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1326071, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7893972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140003 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_full.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_full.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 682774, 'inode': 1326075, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7973974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140014 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1326071, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7893972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140043 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1326079, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8003974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140055 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/libvirt.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/libvirt.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 29672, 'inode': 1326071, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7893972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140066 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1326079, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8003974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140082 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1326059, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7783973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140094 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus_alertmanager.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus_alertmanager.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 115472, 'inode': 1326079, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8003974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140106 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1326059, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7783973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140130 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1326080, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8023973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140142 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/blackbox.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/blackbox.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 31128, 'inode': 1326059, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7783973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140154 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1326080, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8023973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140170 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1326076, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7983973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140183 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/rabbitmq.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/rabbitmq.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 222049, 'inode': 1326080, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8023973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140194 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1326076, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7983973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140219 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1326077, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7983973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140231 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/node_exporter_side_by_side.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/node_exporter_side_by_side.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 70691, 'inode': 1326076, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7983973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140243 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1326077, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7983973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140259 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1326060, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7783973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140271 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/opensearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/opensearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 65458, 'inode': 1326077, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7983973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140291 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1326060, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7783973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140331 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1326074, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7893972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/cadvisor.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/cadvisor.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 53882, 'inode': 1326060, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7783973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140375 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1326074, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7893972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140398 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1326081, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8033974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140410 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/memcached.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/memcached.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 24243, 'inode': 1326074, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7893972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140422 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1326081, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8033974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140450 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1326078, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.7993972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140462 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/redfish.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/redfish.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 38087, 'inode': 1326081, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8033974, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140473 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1326078, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.7993972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140484 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1326064, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7823973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140501 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/prometheus.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/prometheus.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 100249, 'inode': 1326078, 'dev': 209, 'nlink': 1, 'atime': 1737057119.0, 'mtime': 1737057119.0, 'ctime': 1748477716.7993972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140512 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1326064, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7823973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140531 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1326063, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7793972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140549 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/elasticsearch.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/elasticsearch.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 187864, 'inode': 1326064, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7823973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140561 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1326066, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7843971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140572 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898, 'inode': 1326063, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7793972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}})
2025-05-29 01:10:41.140588 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/database.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/database.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 30898,
'inode': 1326063, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7793972, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.140599 | orchestrator | changed: [testbed-node-0] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1326067, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7883973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.140617 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1326066, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7843971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.140637 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/fluentd.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/fluentd.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 
'issock': False, 'uid': 0, 'gid': 0, 'size': 82960, 'inode': 1326066, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7843971, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.140649 | orchestrator | changed: [testbed-node-0] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1326086, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8043973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.140660 | orchestrator | changed: [testbed-node-2] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1326067, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7883973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.140680 | orchestrator | changed: [testbed-node-1] => (item={'key': 'infrastructure/haproxy.json', 'value': {'path': '/operations/grafana/dashboards/infrastructure/haproxy.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 
'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 410814, 'inode': 1326067, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.7883973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.140692 | orchestrator | changed: [testbed-node-2] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1326086, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8043973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.140710 | orchestrator | changed: [testbed-node-1] => (item={'key': 'openstack/openstack.json', 'value': {'path': '/operations/grafana/dashboards/openstack/openstack.json', 'mode': '0644', 'isdir': False, 'ischr': False, 'isblk': False, 'isreg': True, 'isfifo': False, 'islnk': False, 'issock': False, 'uid': 0, 'gid': 0, 'size': 57270, 'inode': 1326086, 'dev': 209, 'nlink': 1, 'atime': 1737057118.0, 'mtime': 1737057118.0, 'ctime': 1748477716.8043973, 'gr_name': 'root', 'pw_name': 'root', 'wusr': True, 'rusr': True, 'xusr': False, 'wgrp': False, 'rgrp': True, 'xgrp': False, 'woth': False, 'roth': True, 'xoth': False, 'isuid': False, 'isgid': False}}) 2025-05-29 01:10:41.140723 | orchestrator | 2025-05-29 01:10:41.140736 | orchestrator | TASK [grafana : Check grafana containers] ************************************** 2025-05-29 01:10:41.140748 | orchestrator | Thursday 29 
May 2025 01:09:28 +0000 (0:00:33.910) 0:00:48.286 ********** 2025-05-29 01:10:41.140768 | orchestrator | changed: [testbed-node-1] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-29 01:10:41.140780 | orchestrator | changed: [testbed-node-0] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-29 01:10:41.140816 | orchestrator | changed: [testbed-node-2] => (item={'key': 'grafana', 'value': {'container_name': 'grafana', 'group': 'grafana', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/grafana:11.4.0.20241206', 'volumes': ['/etc/kolla/grafana/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/'], 'dimensions': {}, 'haproxy': {'grafana_server': {'enabled': 'yes', 'mode': 'http', 
'external': False, 'port': '3000', 'listen_port': '3000'}, 'grafana_server_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '3000', 'listen_port': '3000'}}}}) 2025-05-29 01:10:41.140828 | orchestrator | 2025-05-29 01:10:41.140840 | orchestrator | TASK [grafana : Creating grafana database] ************************************* 2025-05-29 01:10:41.140851 | orchestrator | Thursday 29 May 2025 01:09:29 +0000 (0:00:01.068) 0:00:49.355 ********** 2025-05-29 01:10:41.140862 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:10:41.140874 | orchestrator | 2025-05-29 01:10:41.140885 | orchestrator | TASK [grafana : Creating grafana database user and setting permissions] ******** 2025-05-29 01:10:41.140896 | orchestrator | Thursday 29 May 2025 01:09:32 +0000 (0:00:02.661) 0:00:52.016 ********** 2025-05-29 01:10:41.140912 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:10:41.140923 | orchestrator | 2025-05-29 01:10:41.140934 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-29 01:10:41.140952 | orchestrator | Thursday 29 May 2025 01:09:34 +0000 (0:00:02.186) 0:00:54.203 ********** 2025-05-29 01:10:41.140963 | orchestrator | 2025-05-29 01:10:41.140974 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-29 01:10:41.140985 | orchestrator | Thursday 29 May 2025 01:09:34 +0000 (0:00:00.099) 0:00:54.303 ********** 2025-05-29 01:10:41.140996 | orchestrator | 2025-05-29 01:10:41.141007 | orchestrator | TASK [grafana : Flush handlers] ************************************************ 2025-05-29 01:10:41.141018 | orchestrator | Thursday 29 May 2025 01:09:34 +0000 (0:00:00.062) 0:00:54.366 ********** 2025-05-29 01:10:41.141029 | orchestrator | 2025-05-29 01:10:41.141040 | orchestrator | RUNNING HANDLER [grafana : Restart first grafana container] ******************** 2025-05-29 01:10:41.141051 | 
orchestrator | Thursday 29 May 2025 01:09:35 +0000 (0:00:00.253) 0:00:54.619 ********** 2025-05-29 01:10:41.141062 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:10:41.141074 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:10:41.141085 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:10:41.141096 | orchestrator | 2025-05-29 01:10:41.141107 | orchestrator | RUNNING HANDLER [grafana : Waiting for grafana to start on first node] ********* 2025-05-29 01:10:41.141118 | orchestrator | Thursday 29 May 2025 01:09:37 +0000 (0:00:01.843) 0:00:56.463 ********** 2025-05-29 01:10:41.141129 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:10:41.141140 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:10:41.141152 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (12 retries left). 2025-05-29 01:10:41.141163 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (11 retries left). 2025-05-29 01:10:41.141174 | orchestrator | FAILED - RETRYING: [testbed-node-0]: Waiting for grafana to start on first node (10 retries left). 
2025-05-29 01:10:41.141185 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:10:41.141197 | orchestrator | 2025-05-29 01:10:41.141208 | orchestrator | RUNNING HANDLER [grafana : Restart remaining grafana containers] *************** 2025-05-29 01:10:41.141219 | orchestrator | Thursday 29 May 2025 01:10:15 +0000 (0:00:38.593) 0:01:35.056 ********** 2025-05-29 01:10:41.141230 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:10:41.141241 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:10:41.141252 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:10:41.141263 | orchestrator | 2025-05-29 01:10:41.141274 | orchestrator | TASK [grafana : Wait for grafana application ready] **************************** 2025-05-29 01:10:41.141285 | orchestrator | Thursday 29 May 2025 01:10:34 +0000 (0:00:19.077) 0:01:54.134 ********** 2025-05-29 01:10:41.141296 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:10:41.141307 | orchestrator | 2025-05-29 01:10:41.141318 | orchestrator | TASK [grafana : Remove old grafana docker volume] ****************************** 2025-05-29 01:10:41.141412 | orchestrator | Thursday 29 May 2025 01:10:36 +0000 (0:00:02.150) 0:01:56.284 ********** 2025-05-29 01:10:41.141431 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:10:41.141442 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:10:41.141453 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:10:41.141464 | orchestrator | 2025-05-29 01:10:41.141475 | orchestrator | TASK [grafana : Enable grafana datasources] ************************************ 2025-05-29 01:10:41.141486 | orchestrator | Thursday 29 May 2025 01:10:37 +0000 (0:00:00.591) 0:01:56.876 ********** 2025-05-29 01:10:41.141498 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'influxdb', 'value': {'enabled': False, 'data': {'isDefault': True, 'database': 'telegraf', 'name': 'telegraf', 'type': 'influxdb', 'url': 'https://api-int.testbed.osism.xyz:8086', 'access': 'proxy', 'basicAuth': 
False}}})  2025-05-29 01:10:41.141510 | orchestrator | changed: [testbed-node-0] => (item={'key': 'opensearch', 'value': {'enabled': True, 'data': {'name': 'opensearch', 'type': 'grafana-opensearch-datasource', 'access': 'proxy', 'url': 'https://api-int.testbed.osism.xyz:9200', 'jsonData': {'flavor': 'OpenSearch', 'database': 'flog-*', 'version': '2.11.1', 'timeField': '@timestamp', 'logLevelField': 'log_level'}}}}) 2025-05-29 01:10:41.141530 | orchestrator | 2025-05-29 01:10:41.141541 | orchestrator | TASK [grafana : Disable Getting Started panel] ********************************* 2025-05-29 01:10:41.141552 | orchestrator | Thursday 29 May 2025 01:10:39 +0000 (0:00:02.385) 0:01:59.262 ********** 2025-05-29 01:10:41.141563 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:10:41.141573 | orchestrator | 2025-05-29 01:10:41.141584 | orchestrator | PLAY RECAP ********************************************************************* 2025-05-29 01:10:41.141596 | orchestrator | testbed-node-0 : ok=21  changed=12  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-29 01:10:41.141608 | orchestrator | testbed-node-1 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-29 01:10:41.141619 | orchestrator | testbed-node-2 : ok=14  changed=9  unreachable=0 failed=0 skipped=7  rescued=0 ignored=0 2025-05-29 01:10:41.141629 | orchestrator | 2025-05-29 01:10:41.141640 | orchestrator | 2025-05-29 01:10:41.141652 | orchestrator | TASKS RECAP ******************************************************************** 2025-05-29 01:10:41.141662 | orchestrator | Thursday 29 May 2025 01:10:40 +0000 (0:00:00.382) 0:01:59.645 ********** 2025-05-29 01:10:41.141673 | orchestrator | =============================================================================== 2025-05-29 01:10:41.141684 | orchestrator | grafana : Waiting for grafana to start on first node ------------------- 38.59s 2025-05-29 01:10:41.141700 | orchestrator | grafana : Copying over custom 
dashboards ------------------------------- 33.91s 2025-05-29 01:10:41.141712 | orchestrator | grafana : Restart remaining grafana containers ------------------------- 19.08s 2025-05-29 01:10:41.141723 | orchestrator | grafana : Creating grafana database ------------------------------------- 2.66s 2025-05-29 01:10:41.141733 | orchestrator | grafana : Enable grafana datasources ------------------------------------ 2.39s 2025-05-29 01:10:41.141744 | orchestrator | grafana : Creating grafana database user and setting permissions -------- 2.19s 2025-05-29 01:10:41.141755 | orchestrator | grafana : Wait for grafana application ready ---------------------------- 2.15s 2025-05-29 01:10:41.141766 | orchestrator | grafana : Restart first grafana container ------------------------------- 1.84s 2025-05-29 01:10:41.141776 | orchestrator | grafana : Copying over grafana.ini -------------------------------------- 1.65s 2025-05-29 01:10:41.141869 | orchestrator | grafana : Copying over config.json files -------------------------------- 1.47s 2025-05-29 01:10:41.141886 | orchestrator | grafana : Configuring dashboards provisioning --------------------------- 1.39s 2025-05-29 01:10:41.141897 | orchestrator | service-cert-copy : grafana | Copying over extra CA certificates -------- 1.39s 2025-05-29 01:10:41.141908 | orchestrator | grafana : Configuring Prometheus as data source for Grafana ------------- 1.38s 2025-05-29 01:10:41.141919 | orchestrator | grafana : Check grafana containers -------------------------------------- 1.07s 2025-05-29 01:10:41.141930 | orchestrator | grafana : Find templated grafana dashboards ----------------------------- 0.92s 2025-05-29 01:10:41.141941 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS key ----- 0.92s 2025-05-29 01:10:41.141952 | orchestrator | service-cert-copy : grafana | Copying over backend internal TLS certificate --- 0.85s 2025-05-29 01:10:41.141963 | orchestrator | grafana : Ensuring config directories 
exist ----------------------------- 0.73s 2025-05-29 01:10:41.141974 | orchestrator | grafana : Remove old grafana docker volume ------------------------------ 0.59s 2025-05-29 01:10:41.141985 | orchestrator | grafana : include_tasks ------------------------------------------------- 0.54s 2025-05-29 01:10:41.141996 | orchestrator | 2025-05-29 01:10:41 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:10:41.142008 | orchestrator | 2025-05-29 01:10:41 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:10:44.184987 | orchestrator | 2025-05-29 01:10:44 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:10:44.185569 | orchestrator | 2025-05-29 01:10:44 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:10:44.185641 | orchestrator | 2025-05-29 01:10:44 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:10:47.230263 | orchestrator | 2025-05-29 01:10:47 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:10:47.230913 | orchestrator | 2025-05-29 01:10:47 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:10:47.230955 | orchestrator | 2025-05-29 01:10:47 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:10:50.279120 | orchestrator | 2025-05-29 01:10:50 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:10:50.279218 | orchestrator | 2025-05-29 01:10:50 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:10:50.279235 | orchestrator | 2025-05-29 01:10:50 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:10:53.327082 | orchestrator | 2025-05-29 01:10:53 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:10:53.328187 | orchestrator | 2025-05-29 01:10:53 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:10:53.328215 | 
orchestrator | 2025-05-29 01:10:53 | INFO  | Wait 1 second(s) until the next check [identical poll cycles for tasks 977ef4a5-1708-4276-b51b-8d515656c2d7 and 380b6076-52ab-40a0-aac4-02415436f773, repeated every 3 seconds from 01:10:56 through 01:12:09, omitted] 2025-05-29 01:12:12.555951 | orchestrator | 2025-05-29 01:12:12 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:12.557124 | orchestrator | 2025-05-29 01:12:12 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 
01:12:12.557210 | orchestrator | 2025-05-29 01:12:12 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:15.604240 | orchestrator | 2025-05-29 01:12:15 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:15.607416 | orchestrator | 2025-05-29 01:12:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:15.607473 | orchestrator | 2025-05-29 01:12:15 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:18.650779 | orchestrator | 2025-05-29 01:12:18 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:18.652409 | orchestrator | 2025-05-29 01:12:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:18.652440 | orchestrator | 2025-05-29 01:12:18 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:21.705749 | orchestrator | 2025-05-29 01:12:21 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:21.706571 | orchestrator | 2025-05-29 01:12:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:21.706606 | orchestrator | 2025-05-29 01:12:21 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:24.762256 | orchestrator | 2025-05-29 01:12:24 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:24.762830 | orchestrator | 2025-05-29 01:12:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:24.763118 | orchestrator | 2025-05-29 01:12:24 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:27.811732 | orchestrator | 2025-05-29 01:12:27 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:27.812910 | orchestrator | 2025-05-29 01:12:27 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:27.812945 | orchestrator | 2025-05-29 01:12:27 | INFO  | Wait 1 second(s) 
until the next check 2025-05-29 01:12:30.867658 | orchestrator | 2025-05-29 01:12:30 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:30.870199 | orchestrator | 2025-05-29 01:12:30 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:30.870234 | orchestrator | 2025-05-29 01:12:30 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:33.922677 | orchestrator | 2025-05-29 01:12:33 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:33.923249 | orchestrator | 2025-05-29 01:12:33 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:33.923283 | orchestrator | 2025-05-29 01:12:33 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:36.980483 | orchestrator | 2025-05-29 01:12:36 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:36.980579 | orchestrator | 2025-05-29 01:12:36 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:36.980592 | orchestrator | 2025-05-29 01:12:36 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:40.045618 | orchestrator | 2025-05-29 01:12:40 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:40.045727 | orchestrator | 2025-05-29 01:12:40 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:40.045742 | orchestrator | 2025-05-29 01:12:40 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:43.092516 | orchestrator | 2025-05-29 01:12:43 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:43.094189 | orchestrator | 2025-05-29 01:12:43 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:43.094226 | orchestrator | 2025-05-29 01:12:43 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:46.147486 | orchestrator | 2025-05-29 
01:12:46 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:46.148560 | orchestrator | 2025-05-29 01:12:46 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:46.148592 | orchestrator | 2025-05-29 01:12:46 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:49.196589 | orchestrator | 2025-05-29 01:12:49 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:49.199667 | orchestrator | 2025-05-29 01:12:49 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:49.199705 | orchestrator | 2025-05-29 01:12:49 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:52.259910 | orchestrator | 2025-05-29 01:12:52 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:52.262083 | orchestrator | 2025-05-29 01:12:52 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:52.262117 | orchestrator | 2025-05-29 01:12:52 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:55.324804 | orchestrator | 2025-05-29 01:12:55 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:55.326688 | orchestrator | 2025-05-29 01:12:55 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:55.326733 | orchestrator | 2025-05-29 01:12:55 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:12:58.382347 | orchestrator | 2025-05-29 01:12:58 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:12:58.385376 | orchestrator | 2025-05-29 01:12:58 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:12:58.385414 | orchestrator | 2025-05-29 01:12:58 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:01.438438 | orchestrator | 2025-05-29 01:13:01 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state 
STARTED 2025-05-29 01:13:01.441150 | orchestrator | 2025-05-29 01:13:01 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:01.441225 | orchestrator | 2025-05-29 01:13:01 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:04.496319 | orchestrator | 2025-05-29 01:13:04 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:04.499350 | orchestrator | 2025-05-29 01:13:04 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:04.499428 | orchestrator | 2025-05-29 01:13:04 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:07.555929 | orchestrator | 2025-05-29 01:13:07 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:07.556224 | orchestrator | 2025-05-29 01:13:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:07.556303 | orchestrator | 2025-05-29 01:13:07 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:10.621422 | orchestrator | 2025-05-29 01:13:10 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:10.622712 | orchestrator | 2025-05-29 01:13:10 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:10.622749 | orchestrator | 2025-05-29 01:13:10 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:13.674175 | orchestrator | 2025-05-29 01:13:13 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:13.677867 | orchestrator | 2025-05-29 01:13:13 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:13.677952 | orchestrator | 2025-05-29 01:13:13 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:16.722252 | orchestrator | 2025-05-29 01:13:16 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:16.723718 | orchestrator | 2025-05-29 01:13:16 | INFO  
| Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:16.723771 | orchestrator | 2025-05-29 01:13:16 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:19.777210 | orchestrator | 2025-05-29 01:13:19 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:19.779701 | orchestrator | 2025-05-29 01:13:19 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:19.779735 | orchestrator | 2025-05-29 01:13:19 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:22.830826 | orchestrator | 2025-05-29 01:13:22 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:22.833367 | orchestrator | 2025-05-29 01:13:22 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:22.833491 | orchestrator | 2025-05-29 01:13:22 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:25.890301 | orchestrator | 2025-05-29 01:13:25 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:25.891619 | orchestrator | 2025-05-29 01:13:25 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:25.891650 | orchestrator | 2025-05-29 01:13:25 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:28.939901 | orchestrator | 2025-05-29 01:13:28 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:28.940209 | orchestrator | 2025-05-29 01:13:28 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:28.940233 | orchestrator | 2025-05-29 01:13:28 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:31.977657 | orchestrator | 2025-05-29 01:13:31 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:31.979515 | orchestrator | 2025-05-29 01:13:31 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 
01:13:31.979602 | orchestrator | 2025-05-29 01:13:31 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:35.023217 | orchestrator | 2025-05-29 01:13:35 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:35.023328 | orchestrator | 2025-05-29 01:13:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:35.023346 | orchestrator | 2025-05-29 01:13:35 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:38.072442 | orchestrator | 2025-05-29 01:13:38 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:38.073971 | orchestrator | 2025-05-29 01:13:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:38.074115 | orchestrator | 2025-05-29 01:13:38 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:41.129117 | orchestrator | 2025-05-29 01:13:41 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:41.130661 | orchestrator | 2025-05-29 01:13:41 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:41.130716 | orchestrator | 2025-05-29 01:13:41 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:44.182265 | orchestrator | 2025-05-29 01:13:44 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:44.184234 | orchestrator | 2025-05-29 01:13:44 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:44.184278 | orchestrator | 2025-05-29 01:13:44 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:47.233268 | orchestrator | 2025-05-29 01:13:47 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:47.235303 | orchestrator | 2025-05-29 01:13:47 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:47.235354 | orchestrator | 2025-05-29 01:13:47 | INFO  | Wait 1 second(s) 
until the next check 2025-05-29 01:13:50.285795 | orchestrator | 2025-05-29 01:13:50 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:50.285947 | orchestrator | 2025-05-29 01:13:50 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:50.286984 | orchestrator | 2025-05-29 01:13:50 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:53.320522 | orchestrator | 2025-05-29 01:13:53 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:53.320629 | orchestrator | 2025-05-29 01:13:53 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:53.322302 | orchestrator | 2025-05-29 01:13:53 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:56.352010 | orchestrator | 2025-05-29 01:13:56 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:56.352456 | orchestrator | 2025-05-29 01:13:56 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:56.352505 | orchestrator | 2025-05-29 01:13:56 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:13:59.379365 | orchestrator | 2025-05-29 01:13:59 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:13:59.379474 | orchestrator | 2025-05-29 01:13:59 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:13:59.379491 | orchestrator | 2025-05-29 01:13:59 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:14:02.411608 | orchestrator | 2025-05-29 01:14:02 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:14:02.413848 | orchestrator | 2025-05-29 01:14:02 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:14:02.413883 | orchestrator | 2025-05-29 01:14:02 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:14:05.464156 | orchestrator | 2025-05-29 
01:14:05 | INFO  | Task eaec5ab4-9f32-4193-9257-408f891240f8 is in state STARTED 2025-05-29 01:14:05.466871 | orchestrator | 2025-05-29 01:14:05 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:14:05.467865 | orchestrator | 2025-05-29 01:14:05 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:14:05.468029 | orchestrator | 2025-05-29 01:14:05 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:14:08.527362 | orchestrator | 2025-05-29 01:14:08 | INFO  | Task eaec5ab4-9f32-4193-9257-408f891240f8 is in state STARTED 2025-05-29 01:14:08.529474 | orchestrator | 2025-05-29 01:14:08 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:14:08.531040 | orchestrator | 2025-05-29 01:14:08 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:14:08.531783 | orchestrator | 2025-05-29 01:14:08 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:14:11.586596 | orchestrator | 2025-05-29 01:14:11 | INFO  | Task eaec5ab4-9f32-4193-9257-408f891240f8 is in state STARTED 2025-05-29 01:14:11.588694 | orchestrator | 2025-05-29 01:14:11 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:14:11.589591 | orchestrator | 2025-05-29 01:14:11 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:14:11.589638 | orchestrator | 2025-05-29 01:14:11 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:14:14.655037 | orchestrator | 2025-05-29 01:14:14 | INFO  | Task eaec5ab4-9f32-4193-9257-408f891240f8 is in state STARTED 2025-05-29 01:14:14.656296 | orchestrator | 2025-05-29 01:14:14 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:14:14.658514 | orchestrator | 2025-05-29 01:14:14 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:14:14.658592 | orchestrator | 2025-05-29 01:14:14 | INFO  | Wait 1 
second(s) until the next check 2025-05-29 01:14:17.701627 | orchestrator | 2025-05-29 01:14:17 | INFO  | Task eaec5ab4-9f32-4193-9257-408f891240f8 is in state SUCCESS 2025-05-29 01:14:17.703069 | orchestrator | 2025-05-29 01:14:17 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:14:17.704884 | orchestrator | 2025-05-29 01:14:17 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:14:17.704913 | orchestrator | 2025-05-29 01:14:17 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:14:20.753784 | orchestrator | 2025-05-29 01:14:20 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:14:20.754776 | orchestrator | 2025-05-29 01:14:20 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:14:20.754807 | orchestrator | 2025-05-29 01:14:20 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:14:23.800615 | orchestrator | 2025-05-29 01:14:23 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:14:23.801370 | orchestrator | 2025-05-29 01:14:23 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:14:23.801404 | orchestrator | 2025-05-29 01:14:23 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:14:26.867018 | orchestrator | 2025-05-29 01:14:26 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:14:26.870112 | orchestrator | 2025-05-29 01:14:26 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:14:26.870203 | orchestrator | 2025-05-29 01:14:26 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:14:29.928681 | orchestrator | 2025-05-29 01:14:29 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state STARTED 2025-05-29 01:14:29.929413 | orchestrator | 2025-05-29 01:14:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 
01:14:29.929513 | orchestrator | 2025-05-29 01:14:29 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:14:32.986510 | orchestrator | 2025-05-29 01:14:32.986614 | orchestrator | None 2025-05-29 01:14:32.986630 | orchestrator | 2025-05-29 01:14:32 | INFO  | Task 977ef4a5-1708-4276-b51b-8d515656c2d7 is in state SUCCESS 2025-05-29 01:14:32.988390 | orchestrator | 2025-05-29 01:14:32.988422 | orchestrator | PLAY [Group hosts based on configuration] ************************************** 2025-05-29 01:14:32.988435 | orchestrator | 2025-05-29 01:14:32.988446 | orchestrator | TASK [Group hosts based on OpenStack release] ********************************** 2025-05-29 01:14:32.988486 | orchestrator | Thursday 29 May 2025 01:06:21 +0000 (0:00:00.411) 0:00:00.411 ********** 2025-05-29 01:14:32.988498 | orchestrator | changed: [testbed-manager] 2025-05-29 01:14:32.988574 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:14:32.988588 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:14:32.988641 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:14:32.988653 | orchestrator | changed: [testbed-node-3] 2025-05-29 01:14:32.988677 | orchestrator | changed: [testbed-node-4] 2025-05-29 01:14:32.988688 | orchestrator | changed: [testbed-node-5] 2025-05-29 01:14:32.988699 | orchestrator | 2025-05-29 01:14:32.988710 | orchestrator | TASK [Group hosts based on Kolla action] *************************************** 2025-05-29 01:14:32.988722 | orchestrator | Thursday 29 May 2025 01:06:22 +0000 (0:00:01.602) 0:00:02.014 ********** 2025-05-29 01:14:32.988733 | orchestrator | changed: [testbed-manager] 2025-05-29 01:14:32.988743 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:14:32.988754 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:14:32.988765 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:14:32.988776 | orchestrator | changed: [testbed-node-3] 2025-05-29 01:14:32.988786 | orchestrator | changed: [testbed-node-4] 2025-05-29 
01:14:32.988797 | orchestrator | changed: [testbed-node-5] 2025-05-29 01:14:32.988808 | orchestrator | 2025-05-29 01:14:32.988819 | orchestrator | TASK [Group hosts based on enabled services] *********************************** 2025-05-29 01:14:32.988844 | orchestrator | Thursday 29 May 2025 01:06:23 +0000 (0:00:01.202) 0:00:03.217 ********** 2025-05-29 01:14:32.988856 | orchestrator | changed: [testbed-manager] => (item=enable_nova_True) 2025-05-29 01:14:32.988867 | orchestrator | changed: [testbed-node-0] => (item=enable_nova_True) 2025-05-29 01:14:32.988879 | orchestrator | changed: [testbed-node-1] => (item=enable_nova_True) 2025-05-29 01:14:32.988902 | orchestrator | changed: [testbed-node-2] => (item=enable_nova_True) 2025-05-29 01:14:32.988913 | orchestrator | changed: [testbed-node-3] => (item=enable_nova_True) 2025-05-29 01:14:32.988950 | orchestrator | changed: [testbed-node-4] => (item=enable_nova_True) 2025-05-29 01:14:32.988961 | orchestrator | changed: [testbed-node-5] => (item=enable_nova_True) 2025-05-29 01:14:32.988972 | orchestrator | 2025-05-29 01:14:32.988983 | orchestrator | PLAY [Bootstrap nova API databases] ******************************************** 2025-05-29 01:14:32.988995 | orchestrator | 2025-05-29 01:14:32.989008 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-29 01:14:32.989020 | orchestrator | Thursday 29 May 2025 01:06:24 +0000 (0:00:00.943) 0:00:04.160 ********** 2025-05-29 01:14:32.989033 | orchestrator | included: nova for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:14:32.989046 | orchestrator | 2025-05-29 01:14:32.989058 | orchestrator | TASK [nova : Creating Nova databases] ****************************************** 2025-05-29 01:14:32.989071 | orchestrator | Thursday 29 May 2025 01:06:25 +0000 (0:00:00.918) 0:00:05.078 ********** 2025-05-29 01:14:32.989084 | orchestrator | changed: [testbed-node-0] => (item=nova_cell0) 2025-05-29 01:14:32.989097 | 
orchestrator | changed: [testbed-node-0] => (item=nova_api) 2025-05-29 01:14:32.989110 | orchestrator | 2025-05-29 01:14:32.989122 | orchestrator | TASK [nova : Creating Nova databases user and setting permissions] ************* 2025-05-29 01:14:32.989135 | orchestrator | Thursday 29 May 2025 01:06:30 +0000 (0:00:04.516) 0:00:09.595 ********** 2025-05-29 01:14:32.989147 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-29 01:14:32.989160 | orchestrator | changed: [testbed-node-0] => (item=None) 2025-05-29 01:14:32.989173 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:14:32.989186 | orchestrator | 2025-05-29 01:14:32.989199 | orchestrator | TASK [nova : Ensuring config directories exist] ******************************** 2025-05-29 01:14:32.989211 | orchestrator | Thursday 29 May 2025 01:06:34 +0000 (0:00:04.367) 0:00:13.963 ********** 2025-05-29 01:14:32.989223 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:14:32.989256 | orchestrator | 2025-05-29 01:14:32.989269 | orchestrator | TASK [nova : Copying over config.json files for nova-api-bootstrap] ************ 2025-05-29 01:14:32.989281 | orchestrator | Thursday 29 May 2025 01:06:35 +0000 (0:00:00.726) 0:00:14.689 ********** 2025-05-29 01:14:32.989303 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:14:32.989316 | orchestrator | 2025-05-29 01:14:32.989329 | orchestrator | TASK [nova : Copying over nova.conf for nova-api-bootstrap] ******************** 2025-05-29 01:14:32.989342 | orchestrator | Thursday 29 May 2025 01:06:37 +0000 (0:00:01.621) 0:00:16.311 ********** 2025-05-29 01:14:32.989353 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:14:32.989364 | orchestrator | 2025-05-29 01:14:32.989375 | orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-29 01:14:32.989386 | orchestrator | Thursday 29 May 2025 01:06:40 +0000 (0:00:03.669) 0:00:19.981 ********** 2025-05-29 01:14:32.989397 | orchestrator | skipping: 
[testbed-node-0] 2025-05-29 01:14:32.989408 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.989418 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.989429 | orchestrator | 2025-05-29 01:14:32.989440 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-29 01:14:32.989451 | orchestrator | Thursday 29 May 2025 01:06:41 +0000 (0:00:01.092) 0:00:21.074 ********** 2025-05-29 01:14:32.989461 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:14:32.989472 | orchestrator | 2025-05-29 01:14:32.989483 | orchestrator | TASK [nova : Create cell0 mappings] ******************************************** 2025-05-29 01:14:32.989494 | orchestrator | Thursday 29 May 2025 01:07:11 +0000 (0:00:29.288) 0:00:50.362 ********** 2025-05-29 01:14:32.989505 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:14:32.989516 | orchestrator | 2025-05-29 01:14:32.989527 | orchestrator | TASK [nova-cell : Get a list of existing cells] ******************************** 2025-05-29 01:14:32.989537 | orchestrator | Thursday 29 May 2025 01:07:24 +0000 (0:00:13.504) 0:01:03.867 ********** 2025-05-29 01:14:32.989548 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:14:32.989559 | orchestrator | 2025-05-29 01:14:32.989570 | orchestrator | TASK [nova-cell : Extract current cell settings from list] ********************* 2025-05-29 01:14:32.989581 | orchestrator | Thursday 29 May 2025 01:07:36 +0000 (0:00:12.036) 0:01:15.903 ********** 2025-05-29 01:14:32.989605 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:14:32.989617 | orchestrator | 2025-05-29 01:14:32.989628 | orchestrator | TASK [nova : Update cell0 mappings] ******************************************** 2025-05-29 01:14:32.989640 | orchestrator | Thursday 29 May 2025 01:07:37 +0000 (0:00:00.953) 0:01:16.856 ********** 2025-05-29 01:14:32.989650 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.989661 | orchestrator | 2025-05-29 01:14:32.989672 | 
orchestrator | TASK [nova : include_tasks] **************************************************** 2025-05-29 01:14:32.989683 | orchestrator | Thursday 29 May 2025 01:07:38 +0000 (0:00:00.617) 0:01:17.474 ********** 2025-05-29 01:14:32.989694 | orchestrator | included: /ansible/roles/nova/tasks/bootstrap_service.yml for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:14:32.989705 | orchestrator | 2025-05-29 01:14:32.989716 | orchestrator | TASK [nova : Running Nova API bootstrap container] ***************************** 2025-05-29 01:14:32.989726 | orchestrator | Thursday 29 May 2025 01:07:39 +0000 (0:00:00.788) 0:01:18.263 ********** 2025-05-29 01:14:32.989737 | orchestrator | ok: [testbed-node-0] 2025-05-29 01:14:32.989748 | orchestrator | 2025-05-29 01:14:32.989759 | orchestrator | TASK [Bootstrap upgrade] ******************************************************* 2025-05-29 01:14:32.989769 | orchestrator | Thursday 29 May 2025 01:07:54 +0000 (0:00:15.884) 0:01:34.148 ********** 2025-05-29 01:14:32.989780 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.989791 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.989802 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.989813 | orchestrator | 2025-05-29 01:14:32.989824 | orchestrator | PLAY [Bootstrap nova cell databases] ******************************************* 2025-05-29 01:14:32.989841 | orchestrator | 2025-05-29 01:14:32.989852 | orchestrator | TASK [Bootstrap deploy] ******************************************************** 2025-05-29 01:14:32.989863 | orchestrator | Thursday 29 May 2025 01:07:55 +0000 (0:00:00.267) 0:01:34.415 ********** 2025-05-29 01:14:32.989874 | orchestrator | included: nova-cell for testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:14:32.989892 | orchestrator | 2025-05-29 01:14:32.989903 | orchestrator | TASK [nova-cell : Creating Nova cell database] ********************************* 2025-05-29 01:14:32.989914 | orchestrator | 
Thursday 29 May 2025 01:07:55 +0000 (0:00:00.719) 0:01:35.135 ********** 2025-05-29 01:14:32.989925 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.989936 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.989947 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:14:32.989958 | orchestrator | 2025-05-29 01:14:32.989969 | orchestrator | TASK [nova-cell : Creating Nova cell database user and setting permissions] **** 2025-05-29 01:14:32.989980 | orchestrator | Thursday 29 May 2025 01:07:58 +0000 (0:00:02.602) 0:01:37.738 ********** 2025-05-29 01:14:32.989990 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.990001 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.990012 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:14:32.990082 | orchestrator | 2025-05-29 01:14:32.990093 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-29 01:14:32.990104 | orchestrator | Thursday 29 May 2025 01:08:00 +0000 (0:00:02.213) 0:01:39.952 ********** 2025-05-29 01:14:32.990115 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.990126 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.990136 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.990147 | orchestrator | 2025-05-29 01:14:32.990158 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-29 01:14:32.990169 | orchestrator | Thursday 29 May 2025 01:08:01 +0000 (0:00:00.496) 0:01:40.448 ********** 2025-05-29 01:14:32.990179 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-29 01:14:32.990190 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.990201 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-29 01:14:32.990212 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.990222 | orchestrator | ok: [testbed-node-0] => (item=None) 2025-05-29 01:14:32.990253 | 
orchestrator | ok: [testbed-node-0 -> {{ service_rabbitmq_delegate_host }}] 2025-05-29 01:14:32.990264 | orchestrator | 2025-05-29 01:14:32.990275 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ vhosts exist] ****************** 2025-05-29 01:14:32.990286 | orchestrator | Thursday 29 May 2025 01:08:09 +0000 (0:00:08.394) 0:01:48.843 ********** 2025-05-29 01:14:32.990297 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.990307 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.990318 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.990329 | orchestrator | 2025-05-29 01:14:32.990339 | orchestrator | TASK [service-rabbitmq : nova | Ensure RabbitMQ users exist] ******************* 2025-05-29 01:14:32.990350 | orchestrator | Thursday 29 May 2025 01:08:10 +0000 (0:00:00.638) 0:01:49.481 ********** 2025-05-29 01:14:32.990361 | orchestrator | skipping: [testbed-node-0] => (item=None)  2025-05-29 01:14:32.990372 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.990383 | orchestrator | skipping: [testbed-node-1] => (item=None)  2025-05-29 01:14:32.990393 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.990404 | orchestrator | skipping: [testbed-node-2] => (item=None)  2025-05-29 01:14:32.990415 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.990425 | orchestrator | 2025-05-29 01:14:32.990436 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-29 01:14:32.990447 | orchestrator | Thursday 29 May 2025 01:08:11 +0000 (0:00:01.098) 0:01:50.579 ********** 2025-05-29 01:14:32.990457 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.990468 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.990479 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:14:32.990489 | orchestrator | 2025-05-29 01:14:32.990500 | orchestrator | TASK [nova-cell : Copying over config.json files for nova-cell-bootstrap] 
****** 
2025-05-29 01:14:32.990511 | orchestrator | Thursday 29 May 2025 01:08:11 +0000 (0:00:00.524) 0:01:51.104 **********
2025-05-29 01:14:32.990521 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.990532 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.990551 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:14:32.990562 | orchestrator | 
2025-05-29 01:14:32.990573 | orchestrator | TASK [nova-cell : Copying over nova.conf for nova-cell-bootstrap] **************
2025-05-29 01:14:32.990584 | orchestrator | Thursday 29 May 2025 01:08:12 +0000 (0:00:00.956) 0:01:52.060 **********
2025-05-29 01:14:32.990595 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.990606 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.990623 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:14:32.990634 | orchestrator | 
2025-05-29 01:14:32.990645 | orchestrator | TASK [nova-cell : Running Nova cell bootstrap container] ***********************
2025-05-29 01:14:32.990656 | orchestrator | Thursday 29 May 2025 01:08:15 +0000 (0:00:02.206) 0:01:54.267 **********
2025-05-29 01:14:32.990667 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.990678 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.990689 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:14:32.990700 | orchestrator | 
2025-05-29 01:14:32.990711 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-29 01:14:32.990722 | orchestrator | Thursday 29 May 2025 01:08:34 +0000 (0:00:19.118) 0:02:13.386 **********
2025-05-29 01:14:32.990732 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.990743 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.990754 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:14:32.990765 | orchestrator | 
2025-05-29 01:14:32.990776 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-29 01:14:32.990786 | orchestrator | Thursday 29 May 2025 01:08:44 +0000 (0:00:10.107) 0:02:23.493 **********
2025-05-29 01:14:32.990797 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:14:32.990808 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.990819 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.990829 | orchestrator | 
2025-05-29 01:14:32.990840 | orchestrator | TASK [nova-cell : Create cell] *************************************************
2025-05-29 01:14:32.990851 | orchestrator | Thursday 29 May 2025 01:08:45 +0000 (0:00:01.460) 0:02:24.953 **********
2025-05-29 01:14:32.990950 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.991005 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.991049 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:14:32.991060 | orchestrator | 
2025-05-29 01:14:32.991108 | orchestrator | TASK [nova-cell : Update cell] *************************************************
2025-05-29 01:14:32.991121 | orchestrator | Thursday 29 May 2025 01:08:56 +0000 (0:00:10.364) 0:02:35.317 **********
2025-05-29 01:14:32.991132 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.991143 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:32.991153 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.991164 | orchestrator | 
2025-05-29 01:14:32.991175 | orchestrator | TASK [Bootstrap upgrade] *******************************************************
2025-05-29 01:14:32.991186 | orchestrator | Thursday 29 May 2025 01:08:57 +0000 (0:00:01.464) 0:02:36.782 **********
2025-05-29 01:14:32.991196 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:32.991207 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.991218 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.991281 | orchestrator | 
2025-05-29 01:14:32.991295 | orchestrator | PLAY [Apply role nova] *********************************************************
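The "Get a list of existing cells" and "Extract current cell settings from list" tasks above work by running `nova-manage cell_v2 list_cells --verbose` inside the bootstrap container and pulling the current cell's UUID and URLs out of the printed table. A minimal sketch of that parsing step, using an illustrative sample table (not output from this run):

```python
# Sketch: parse the ASCII table printed by `nova-manage cell_v2 list_cells --verbose`
# into a list of dicts, roughly what the "Extract current cell settings" task does.
# The SAMPLE output below is illustrative only, not taken from this job.

SAMPLE = """\
+-------+--------------------------------------+-----------------------------+-------------------------------+----------+
|  Name |                 UUID                 |        Transport URL        |      Database Connection      | Disabled |
+-------+--------------------------------------+-----------------------------+-------------------------------+----------+
| cell0 | 00000000-0000-0000-0000-000000000000 |            none:/           | mysql+pymysql://nova@db/cell0 |  False   |
+-------+--------------------------------------+-----------------------------+-------------------------------+----------+
"""

def parse_cells(table: str) -> list[dict]:
    # Keep only rows that carry cell data (separator rows start with '+').
    rows = [line for line in table.splitlines() if line.startswith("|")]
    header, *data = [[cell.strip() for cell in row.strip("|").split("|")] for row in rows]
    return [dict(zip(header, row)) for row in data]

cells = parse_cells(SAMPLE)
print(cells[0]["Name"], cells[0]["UUID"])
```

If the named cell already exists, the role goes on to the "Update cell" task (skipped in this run) instead of "Create cell".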
2025-05-29 01:14:32.991306 | orchestrator | 
2025-05-29 01:14:32.991317 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-29 01:14:32.991328 | orchestrator | Thursday 29 May 2025 01:08:58 +0000 (0:00:00.478) 0:02:37.261 **********
2025-05-29 01:14:32.991338 | orchestrator | included: /ansible/roles/nova/tasks/deploy.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 01:14:32.991350 | orchestrator | 
2025-05-29 01:14:32.991361 | orchestrator | TASK [service-ks-register : nova | Creating services] **************************
2025-05-29 01:14:32.991372 | orchestrator | Thursday 29 May 2025 01:08:58 +0000 (0:00:00.623) 0:02:37.884 **********
2025-05-29 01:14:32.991383 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy (compute_legacy)) 
2025-05-29 01:14:32.991402 | orchestrator | changed: [testbed-node-0] => (item=nova (compute))
2025-05-29 01:14:32.991429 | orchestrator | 
2025-05-29 01:14:32.991441 | orchestrator | TASK [service-ks-register : nova | Creating endpoints] *************************
2025-05-29 01:14:32.991451 | orchestrator | Thursday 29 May 2025 01:09:02 +0000 (0:00:03.401) 0:02:41.285 **********
2025-05-29 01:14:32.991474 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api-int.testbed.osism.xyz:8774/v2/%(tenant_id)s -> internal) 
2025-05-29 01:14:32.991487 | orchestrator | skipping: [testbed-node-0] => (item=nova_legacy -> https://api.testbed.osism.xyz:8774/v2/%(tenant_id)s -> public) 
2025-05-29 01:14:32.991498 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api-int.testbed.osism.xyz:8774/v2.1 -> internal)
2025-05-29 01:14:32.991551 | orchestrator | changed: [testbed-node-0] => (item=nova -> https://api.testbed.osism.xyz:8774/v2.1 -> public)
2025-05-29 01:14:32.991562 | orchestrator | 
2025-05-29 01:14:32.991573 | orchestrator | TASK [service-ks-register : nova | Creating projects] **************************
2025-05-29 01:14:32.991584 | orchestrator | Thursday 29 May 2025 01:09:08 +0000 (0:00:06.333) 0:02:47.619 **********
2025-05-29 01:14:32.991594 | orchestrator | ok: [testbed-node-0] => (item=service)
2025-05-29 01:14:32.991604 | orchestrator | 
2025-05-29 01:14:32.991613 | orchestrator | TASK [service-ks-register : nova | Creating users] *****************************
2025-05-29 01:14:32.991623 | orchestrator | Thursday 29 May 2025 01:09:11 +0000 (0:00:03.303) 0:02:50.922 **********
2025-05-29 01:14:32.991633 | orchestrator | [WARNING]: Module did not set no_log for update_password
2025-05-29 01:14:32.991642 | orchestrator | changed: [testbed-node-0] => (item=nova -> service)
2025-05-29 01:14:32.991652 | orchestrator | 
2025-05-29 01:14:32.991662 | orchestrator | TASK [service-ks-register : nova | Creating roles] *****************************
2025-05-29 01:14:32.991703 | orchestrator | Thursday 29 May 2025 01:09:15 +0000 (0:00:03.866) 0:02:54.789 **********
2025-05-29 01:14:32.991713 | orchestrator | ok: [testbed-node-0] => (item=admin)
2025-05-29 01:14:32.991723 | orchestrator | 
2025-05-29 01:14:32.991732 | orchestrator | TASK [service-ks-register : nova | Granting user roles] ************************
2025-05-29 01:14:32.991742 | orchestrator | Thursday 29 May 2025 01:09:18 +0000 (0:00:03.360) 0:02:58.150 **********
2025-05-29 01:14:32.991751 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> admin)
2025-05-29 01:14:32.991761 | orchestrator | changed: [testbed-node-0] => (item=nova -> service -> service)
2025-05-29 01:14:32.991771 | orchestrator | 
2025-05-29 01:14:32.991780 | orchestrator | TASK [nova : Ensuring config directories exist] ********************************
2025-05-29 01:14:32.991798 | orchestrator | Thursday 29 May 2025 01:09:27 +0000 (0:00:08.348) 0:03:06.499 **********
2025-05-29 01:14:32.991936 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-29 01:14:32.991965 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-29 01:14:32.991985 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:14:32.991997 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
2025-05-29 01:14:32.992018 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-29 01:14:32.992034 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:14:32.992051 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
2025-05-29 01:14:32.992061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:14:32.992071 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 
2025-05-29 01:14:32.992082 | orchestrator | 
2025-05-29 01:14:32.992092 | orchestrator | TASK [nova : Check if policies shall be overwritten] ***************************
2025-05-29 01:14:32.992102 | orchestrator | Thursday 29 May 2025 01:09:28 +0000 (0:00:01.537) 0:03:08.037 **********
2025-05-29 01:14:32.992112 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:32.992122 | orchestrator | 
2025-05-29 01:14:32.992131 | orchestrator | TASK [nova : Set nova policy file] *********************************************
2025-05-29 01:14:32.992141 | orchestrator | Thursday 29 May 2025 01:09:29 +0000 (0:00:00.284) 0:03:08.322 **********
2025-05-29 01:14:32.992150 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:32.992160 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.992170 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.992179 | orchestrator | 
2025-05-29 01:14:32.992189 | orchestrator | TASK [nova : Check for vendordata file] ****************************************
2025-05-29 01:14:32.992198 | orchestrator | Thursday 29 May 2025 01:09:29 +0000 (0:00:00.313) 0:03:08.635 **********
2025-05-29 01:14:32.992208 | orchestrator | ok: [testbed-node-0 -> localhost]
2025-05-29 01:14:32.992218 | orchestrator | 
2025-05-29 01:14:32.992254 | orchestrator | TASK [nova : Set vendordata file path] *****************************************
2025-05-29 01:14:32.992265 | orchestrator | Thursday 29 May 2025 01:09:29 +0000 (0:00:00.564) 0:03:09.199 **********
2025-05-29 01:14:32.992274 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:32.992284 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.992293 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.992303 | orchestrator | 
2025-05-29 01:14:32.992313 | orchestrator | TASK [nova : include_tasks] ****************************************************
2025-05-29 01:14:32.992322 | orchestrator | Thursday 29 May 2025 01:09:30 +0000 (0:00:00.310) 0:03:09.509 **********
2025-05-29 01:14:32.992332 | orchestrator | included: /ansible/roles/nova/tasks/copy-certs.yml for testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 01:14:32.992348 | orchestrator | 
2025-05-29 01:14:32.992358 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-05-29 01:14:32.992367 | orchestrator | Thursday 29 May 2025 01:09:31 +0000 (0:00:00.812) 0:03:10.322 **********
2025-05-29 01:14:32.992382 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-29 01:14:32.992395 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-29 01:14:32.992414 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-29 01:14:32.992431 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:14:32.992446 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:14:32.992457 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})
2025-05-29 01:14:32.992467 | orchestrator | 
2025-05-29 01:14:32.992477 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-05-29 01:14:32.992487 | orchestrator | Thursday 29 May 2025 01:09:33 +0000 (0:00:02.599) 0:03:12.921 **********
2025-05-29 01:14:32.992497 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2025-05-29 01:14:32.992508 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-05-29 01:14:32.992523 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:32.992539 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2025-05-29 01:14:32.992556 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-05-29 01:14:32.992566 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.992577 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2025-05-29 01:14:32.992588 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-05-29 01:14:32.992598 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.992608 | orchestrator | 
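The service definitions repeated throughout these tasks carry a `healthcheck` block with second-valued strings (`'interval': '30'`, `'healthcheck_curl http://…:8774'`, `'healthcheck_port nova-scheduler 5672'`). Docker's container healthcheck format expects durations in nanoseconds, so a deployment tool has to convert such a dict before handing it to the container engine. A minimal sketch of that conversion (the mapping is an assumption for illustration, not kolla-ansible's exact code):

```python
# Sketch: map a kolla-ansible style healthcheck dict (seconds as strings, as
# seen in the service definitions logged above) onto Docker's healthcheck
# shape, which wants nanosecond integers. Illustrative only.

NS_PER_S = 1_000_000_000  # Docker durations are nanoseconds

def to_docker_healthcheck(hc: dict) -> dict:
    return {
        "test": hc["test"],  # e.g. ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672']
        "interval": int(hc["interval"]) * NS_PER_S,
        "timeout": int(hc["timeout"]) * NS_PER_S,
        "start_period": int(hc["start_period"]) * NS_PER_S,
        "retries": int(hc["retries"]),
    }

hc = {"interval": "30", "retries": "3", "start_period": "5",
      "test": ["CMD-SHELL", "healthcheck_port nova-scheduler 5672"], "timeout": "30"}
print(to_docker_healthcheck(hc)["interval"])  # 30000000000
```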
2025-05-29 01:14:32.992617 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-05-29 01:14:32.992627 | orchestrator | Thursday 29 May 2025 01:09:34 +0000 (0:00:00.598) 0:03:13.520 **********
2025-05-29 01:14:32.992656 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2025-05-29 01:14:32.992668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-05-29 01:14:32.992678 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.992688 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2025-05-29 01:14:32.992699 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-05-29 01:14:32.992709 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:32.992727 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 
2025-05-29 01:14:32.992752 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 
2025-05-29 01:14:32.992763 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.992772 | orchestrator | 
2025-05-29 01:14:32.992782 | orchestrator | TASK [nova : Copying over config.json files for services] **********************
2025-05-29 01:14:32.992792 | orchestrator | Thursday 29 May 2025 01:09:35 +0000 (0:00:01.169) 0:03:14.690 **********
2025-05-29 01:14:32.992803 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})
2025-05-29 01:14:32.992814 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled':
True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-29 01:14:32.992843 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 
'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-29 01:14:32.992855 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.992865 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.992876 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.992900 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.992933 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.992945 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.992955 | orchestrator | 2025-05-29 01:14:32.992965 | orchestrator | TASK [nova : Copying over nova.conf] ******************************************* 2025-05-29 01:14:32.992974 | orchestrator | Thursday 29 May 2025 01:09:38 +0000 (0:00:02.683) 0:03:17.373 ********** 2025-05-29 01:14:32.992989 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-29 01:14:32.993001 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 
'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-29 01:14:32.993025 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 
'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-29 01:14:32.993041 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.993051 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.993061 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.993071 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.993081 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.993103 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', 
''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.993113 | orchestrator | 2025-05-29 01:14:32.993123 | orchestrator | TASK [nova : Copying over existing policy file] ******************************** 2025-05-29 01:14:32.993133 | orchestrator | Thursday 29 May 2025 01:09:44 +0000 (0:00:06.046) 0:03:23.420 ********** 2025-05-29 01:14:32.993148 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-29 01:14:32.993159 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 
'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.993169 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.993179 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.993196 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 
'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-29 01:14:32.993213 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.993228 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.993292 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.993303 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}})  2025-05-29 01:14:32.993314 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.993331 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 
'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.993341 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.993351 | orchestrator | 2025-05-29 01:14:32.993361 | orchestrator | TASK [nova : Copying over nova-api-wsgi.conf] ********************************** 2025-05-29 01:14:32.993371 | orchestrator | Thursday 29 May 2025 01:09:45 +0000 (0:00:00.871) 0:03:24.291 ********** 2025-05-29 01:14:32.993381 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:14:32.993391 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:14:32.993400 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:14:32.993410 | orchestrator | 2025-05-29 01:14:32.993419 | orchestrator | TASK [nova : Copying over vendordata file] ************************************* 2025-05-29 01:14:32.993429 | orchestrator | Thursday 29 May 2025 01:09:46 +0000 (0:00:01.789) 0:03:26.081 ********** 2025-05-29 01:14:32.993444 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.993455 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.993464 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.993474 | orchestrator | 2025-05-29 01:14:32.993484 | orchestrator | TASK [nova : Check nova containers] ******************************************** 2025-05-29 01:14:32.993493 | orchestrator | Thursday 29 May 2025 01:09:47 +0000 (0:00:00.534) 0:03:26.616 ********** 2025-05-29 01:14:32.993508 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 
'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-29 01:14:32.993520 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 
'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-29 01:14:32.993536 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-api', 'value': {'container_name': 'nova_api', 'group': 'nova-api', 'image': 'registry.osism.tech/kolla/release/nova-api:29.2.1.20241206', 'enabled': True, 'privileged': True, 'volumes': ['/etc/kolla/nova-api/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:8774 '], 'timeout': '30'}, 'haproxy': {'nova_api': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_api_external': {'enabled': True, 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8774', 'listen_port': '8774', 'tls_backend': 'no'}, 'nova_metadata': {'enabled': True, 'mode': 'http', 'external': False, 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}, 'nova_metadata_external': {'enabled': 'no', 'mode': 'http', 'external': True, 'external_fqdn': 'api.testbed.osism.xyz', 'port': '8775', 'listen_port': '8775', 'tls_backend': 'no'}}}}) 2025-05-29 01:14:32.993951 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.993973 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.993982 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.993991 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.994010 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-scheduler', 'value': {'container_name': 'nova_scheduler', 'group': 'nova-scheduler', 'image': 'registry.osism.tech/kolla/release/nova-scheduler:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-scheduler/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-scheduler 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.994066 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-super-conductor', 'value': {'container_name': 'nova_super_conductor', 'group': 'nova-super-conductor', 'enabled': 'no', 'image': 'registry.osism.tech/kolla/release/nova-super-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-super-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.994075 | orchestrator | 2025-05-29 01:14:32.994084 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-29 01:14:32.994100 | orchestrator | Thursday 29 May 2025 01:09:49 +0000 (0:00:02.060) 0:03:28.676 ********** 2025-05-29 01:14:32.994108 | orchestrator | 2025-05-29 01:14:32.994117 | orchestrator | TASK [nova : Flush handlers] 
*************************************************** 2025-05-29 01:14:32.994125 | orchestrator | Thursday 29 May 2025 01:09:49 +0000 (0:00:00.269) 0:03:28.946 ********** 2025-05-29 01:14:32.994132 | orchestrator | 2025-05-29 01:14:32.994140 | orchestrator | TASK [nova : Flush handlers] *************************************************** 2025-05-29 01:14:32.994148 | orchestrator | Thursday 29 May 2025 01:09:49 +0000 (0:00:00.111) 0:03:29.057 ********** 2025-05-29 01:14:32.994156 | orchestrator | 2025-05-29 01:14:32.994164 | orchestrator | RUNNING HANDLER [nova : Restart nova-scheduler container] ********************** 2025-05-29 01:14:32.994178 | orchestrator | Thursday 29 May 2025 01:09:50 +0000 (0:00:00.243) 0:03:29.301 ********** 2025-05-29 01:14:32.994218 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:14:32.994226 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:14:32.994254 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:14:32.994262 | orchestrator | 2025-05-29 01:14:32.994291 | orchestrator | RUNNING HANDLER [nova : Restart nova-api container] **************************** 2025-05-29 01:14:32.994299 | orchestrator | Thursday 29 May 2025 01:10:11 +0000 (0:00:21.869) 0:03:51.171 ********** 2025-05-29 01:14:32.994307 | orchestrator | changed: [testbed-node-0] 2025-05-29 01:14:32.994315 | orchestrator | changed: [testbed-node-2] 2025-05-29 01:14:32.994323 | orchestrator | changed: [testbed-node-1] 2025-05-29 01:14:32.994331 | orchestrator | 2025-05-29 01:14:32.994338 | orchestrator | PLAY [Apply role nova-cell] **************************************************** 2025-05-29 01:14:32.994346 | orchestrator | 2025-05-29 01:14:32.994354 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-29 01:14:32.994362 | orchestrator | Thursday 29 May 2025 01:10:17 +0000 (0:00:05.827) 0:03:56.998 ********** 2025-05-29 01:14:32.994371 | orchestrator | included: /ansible/roles/nova-cell/tasks/deploy.yml 
for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2 2025-05-29 01:14:32.994380 | orchestrator | 2025-05-29 01:14:32.994388 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-29 01:14:32.994402 | orchestrator | Thursday 29 May 2025 01:10:19 +0000 (0:00:01.377) 0:03:58.376 ********** 2025-05-29 01:14:32.994415 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:14:32.994423 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:14:32.994430 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:14:32.994438 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.994446 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.994454 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.994461 | orchestrator | 2025-05-29 01:14:32.994469 | orchestrator | TASK [Load and persist br_netfilter module] ************************************ 2025-05-29 01:14:32.994477 | orchestrator | Thursday 29 May 2025 01:10:19 +0000 (0:00:00.690) 0:03:59.067 ********** 2025-05-29 01:14:32.994485 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.994493 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.994500 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.994508 | orchestrator | included: module-load for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 01:14:32.994516 | orchestrator | 2025-05-29 01:14:32.994524 | orchestrator | TASK [module-load : Load modules] ********************************************** 2025-05-29 01:14:32.994532 | orchestrator | Thursday 29 May 2025 01:10:20 +0000 (0:00:01.059) 0:04:00.127 ********** 2025-05-29 01:14:32.994540 | orchestrator | ok: [testbed-node-4] => (item=br_netfilter) 2025-05-29 01:14:32.994548 | orchestrator | ok: [testbed-node-3] => (item=br_netfilter) 2025-05-29 01:14:32.994556 | orchestrator | ok: [testbed-node-5] => (item=br_netfilter) 2025-05-29 
01:14:32.994602 | orchestrator | 2025-05-29 01:14:32.994612 | orchestrator | TASK [module-load : Persist modules via modules-load.d] ************************ 2025-05-29 01:14:32.994621 | orchestrator | Thursday 29 May 2025 01:10:21 +0000 (0:00:00.839) 0:04:00.966 ********** 2025-05-29 01:14:32.994630 | orchestrator | changed: [testbed-node-3] => (item=br_netfilter) 2025-05-29 01:14:32.994639 | orchestrator | changed: [testbed-node-4] => (item=br_netfilter) 2025-05-29 01:14:32.994705 | orchestrator | changed: [testbed-node-5] => (item=br_netfilter) 2025-05-29 01:14:32.994715 | orchestrator | 2025-05-29 01:14:32.994725 | orchestrator | TASK [module-load : Drop module persistence] *********************************** 2025-05-29 01:14:32.994733 | orchestrator | Thursday 29 May 2025 01:10:23 +0000 (0:00:01.292) 0:04:02.258 ********** 2025-05-29 01:14:32.994742 | orchestrator | skipping: [testbed-node-3] => (item=br_netfilter)  2025-05-29 01:14:32.994751 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:14:32.994760 | orchestrator | skipping: [testbed-node-4] => (item=br_netfilter)  2025-05-29 01:14:32.994769 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:14:32.994778 | orchestrator | skipping: [testbed-node-5] => (item=br_netfilter)  2025-05-29 01:14:32.994787 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:14:32.994796 | orchestrator | 2025-05-29 01:14:32.994805 | orchestrator | TASK [nova-cell : Enable bridge-nf-call sysctl variables] ********************** 2025-05-29 01:14:32.994813 | orchestrator | Thursday 29 May 2025 01:10:23 +0000 (0:00:00.638) 0:04:02.897 ********** 2025-05-29 01:14:32.994822 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-29 01:14:32.994831 | orchestrator | skipping: [testbed-node-0] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-29 01:14:32.994840 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.994849 | orchestrator | skipping: 
[testbed-node-1] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-29 01:14:32.994859 | orchestrator | skipping: [testbed-node-1] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-29 01:14:32.994868 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-29 01:14:32.994876 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-29 01:14:32.994885 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-iptables) 2025-05-29 01:14:32.994894 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.994903 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-iptables)  2025-05-29 01:14:32.994918 | orchestrator | skipping: [testbed-node-2] => (item=net.bridge.bridge-nf-call-ip6tables)  2025-05-29 01:14:32.994926 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.994934 | orchestrator | changed: [testbed-node-3] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-29 01:14:32.994941 | orchestrator | changed: [testbed-node-4] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-29 01:14:32.994949 | orchestrator | changed: [testbed-node-5] => (item=net.bridge.bridge-nf-call-ip6tables) 2025-05-29 01:14:32.994957 | orchestrator | 2025-05-29 01:14:32.994970 | orchestrator | TASK [nova-cell : Install udev kolla kvm rules] ******************************** 2025-05-29 01:14:32.994978 | orchestrator | Thursday 29 May 2025 01:10:24 +0000 (0:00:01.254) 0:04:04.151 ********** 2025-05-29 01:14:32.994986 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.994994 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.995002 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.995009 | orchestrator | changed: [testbed-node-3] 2025-05-29 01:14:32.995017 | orchestrator | changed: [testbed-node-4] 2025-05-29 01:14:32.995025 | orchestrator | changed: [testbed-node-5] 2025-05-29 01:14:32.995032 | 
orchestrator | 2025-05-29 01:14:32.995040 | orchestrator | TASK [nova-cell : Mask qemu-kvm service] *************************************** 2025-05-29 01:14:32.995048 | orchestrator | Thursday 29 May 2025 01:10:26 +0000 (0:00:01.126) 0:04:05.278 ********** 2025-05-29 01:14:32.995056 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.995064 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.995071 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.995079 | orchestrator | changed: [testbed-node-4] 2025-05-29 01:14:32.995087 | orchestrator | changed: [testbed-node-5] 2025-05-29 01:14:32.995094 | orchestrator | changed: [testbed-node-3] 2025-05-29 01:14:32.995102 | orchestrator | 2025-05-29 01:14:32.995110 | orchestrator | TASK [nova-cell : Ensuring config directories exist] *************************** 2025-05-29 01:14:32.995118 | orchestrator | Thursday 29 May 2025 01:10:27 +0000 (0:00:01.848) 0:04:07.126 ********** 2025-05-29 01:14:32.995131 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995141 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 
'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995150 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.995167 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.995177 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995189 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995198 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.995220 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.995253 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.995263 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 
'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.995286 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995295 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.995308 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.995317 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.995342 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995359 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.995368 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.995382 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.995391 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995403 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 
'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.995412 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.995420 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.995458 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.995468 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.995482 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.995495 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': 
{'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.995504 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995518 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.995526 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.995587 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995597 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 
'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.995609 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995618 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.995634 | orchestrator | skipping: [testbed-node-0] 
=> (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.995643 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995655 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995664 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.995676 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.995685 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.995698 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 
'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995706 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.995715 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.995727 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995736 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.995748 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.995756 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 
'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995771 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.995779 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.995788 | 
orchestrator |
2025-05-29 01:14:32.995796 | orchestrator | TASK [nova-cell : include_tasks] ***********************************************
2025-05-29 01:14:32.995804 | orchestrator | Thursday 29 May 2025 01:10:30 +0000 (0:00:02.718) 0:04:09.844 **********
2025-05-29 01:14:32.995812 | orchestrator | included: /ansible/roles/nova-cell/tasks/copy-certs.yml for testbed-node-3, testbed-node-4, testbed-node-5, testbed-node-0, testbed-node-1, testbed-node-2
2025-05-29 01:14:32.995821 | orchestrator |
2025-05-29 01:14:32.995829 | orchestrator | TASK [service-cert-copy : nova | Copying over extra CA certificates] ***********
2025-05-29 01:14:32.995837 | orchestrator | Thursday 29 May 2025 01:10:32 +0000 (0:00:01.415) 0:04:11.260 **********
2025-05-29 01:14:32.995851 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-29 01:14:32.995863 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes':
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995878 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995886 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995895 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995908 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995916 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995929 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995943 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995952 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995960 | orchestrator | 
changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995969 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.995982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-29 01:14:32.995995 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-29 01:14:32.996008 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-29 01:14:32.996016 | orchestrator |
2025-05-29 01:14:32.996024 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS certificate] ***
2025-05-29 01:14:32.996032 | orchestrator | Thursday 29 May 2025 01:10:35 +0000 (0:00:03.780) 0:04:15.040 **********
2025-05-29 01:14:32.996051 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-29 01:14:32.996060 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})
2025-05-29 01:14:32.996073 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes':
['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.996081 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:14:32.996117 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.996133 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 
'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.996141 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.996150 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:14:32.996158 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.996202 | 
orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.996217 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.996277 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:14:32.996287 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.996296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.996304 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.996334 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.996344 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-29 01:14:32.996353 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.996367 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})
2025-05-29 01:14:32.996376 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})
2025-05-29 01:14:32.996392 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.996400 | orchestrator |
2025-05-29 01:14:32.996408 | orchestrator | TASK [service-cert-copy : nova | Copying over backend internal TLS key] ********
2025-05-29 01:14:32.996416 | orchestrator | Thursday 29 May 2025 01:10:37 +0000 (0:00:01.921) 0:04:16.962 **********
2025-05-29 01:14:32.996429 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name':
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.996437 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.996444 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.996451 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:14:32.997149 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.997176 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.997189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': 
{'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.997197 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:14:32.997204 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.997211 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.997219 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.997226 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:14:32.997280 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.997289 | orchestrator | skipping: 
[testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.997300 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.997307 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.997314 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.997321 | orchestrator | skipping: [testbed-node-1] 2025-05-29 
01:14:32.997328 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.997335 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.997342 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.997349 | orchestrator | 2025-05-29 01:14:32.997356 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-29 01:14:32.997368 | orchestrator | Thursday 29 May 2025 01:10:40 +0000 (0:00:02.512) 0:04:19.474 ********** 2025-05-29 01:14:32.997374 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.997381 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.997388 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.997395 | orchestrator | included: /ansible/roles/nova-cell/tasks/external_ceph.yml for testbed-node-3, testbed-node-4, testbed-node-5 2025-05-29 
01:14:32.997402 | orchestrator | 2025-05-29 01:14:32.997408 | orchestrator | TASK [nova-cell : Check nova keyring file] ************************************* 2025-05-29 01:14:32.997415 | orchestrator | Thursday 29 May 2025 01:10:41 +0000 (0:00:01.243) 0:04:20.717 ********** 2025-05-29 01:14:32.997440 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-29 01:14:32.997448 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-29 01:14:32.997455 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-29 01:14:32.997462 | orchestrator | 2025-05-29 01:14:32.997469 | orchestrator | TASK [nova-cell : Check cinder keyring file] *********************************** 2025-05-29 01:14:32.997475 | orchestrator | Thursday 29 May 2025 01:10:42 +0000 (0:00:00.745) 0:04:21.462 ********** 2025-05-29 01:14:32.997482 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-29 01:14:32.997489 | orchestrator | ok: [testbed-node-4 -> localhost] 2025-05-29 01:14:32.997495 | orchestrator | ok: [testbed-node-5 -> localhost] 2025-05-29 01:14:32.997502 | orchestrator | 2025-05-29 01:14:32.997509 | orchestrator | TASK [nova-cell : Extract nova key from file] ********************************** 2025-05-29 01:14:32.997515 | orchestrator | Thursday 29 May 2025 01:10:42 +0000 (0:00:00.614) 0:04:22.077 ********** 2025-05-29 01:14:32.997522 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:14:32.997529 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:14:32.997536 | orchestrator | ok: [testbed-node-5] 2025-05-29 01:14:32.997542 | orchestrator | 2025-05-29 01:14:32.997549 | orchestrator | TASK [nova-cell : Extract cinder key from file] ******************************** 2025-05-29 01:14:32.997556 | orchestrator | Thursday 29 May 2025 01:10:43 +0000 (0:00:00.544) 0:04:22.621 ********** 2025-05-29 01:14:32.997562 | orchestrator | ok: [testbed-node-3] 2025-05-29 01:14:32.997569 | orchestrator | ok: [testbed-node-4] 2025-05-29 01:14:32.997576 | orchestrator | ok: [testbed-node-5] 
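The two "Extract nova/cinder key from file" tasks above read the base64 Ceph client key out of a keyring file fetched in the preceding "Check ... keyring file" steps. A minimal sketch of that parsing step, using a hypothetical `extract_key` helper and made-up key material (kolla-ansible does the equivalent with Jinja2 filters, not this exact code):

```python
import re

def extract_key(keyring_text: str, client: str) -> str:
    """Return the `key = ...` value from the given client section of a
    Ceph keyring (INI-like syntax). Hypothetical helper for illustration."""
    # Grab everything from the [client.X] header to the next section or EOF.
    section = re.search(
        r"\[%s\](.*?)(?=\n\[|\Z)" % re.escape(client), keyring_text, re.S
    )
    if not section:
        raise KeyError(client)
    # Within the section, pick the value of the `key` entry.
    match = re.search(r"^\s*key\s*=\s*(\S+)", section.group(1), re.M)
    if not match:
        raise KeyError("key")
    return match.group(1)

# Sample keyring in the shape `ceph auth get client.nova` emits
# (the key material here is invented).
sample = """\
[client.nova]
    key = AQBEXAMPLEKEYxxxxxxxxxxxxxx==
    caps mon = "profile rbd"
    caps osd = "profile rbd pool=vms, profile rbd pool=images"
"""

print(extract_key(sample, "client.nova"))
```

The extracted key is what the later "Pushing secrets key for libvirt" task installs on the compute nodes.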
2025-05-29 01:14:32.997582 | orchestrator | 2025-05-29 01:14:32.997589 | orchestrator | TASK [nova-cell : Copy over ceph nova keyring file] **************************** 2025-05-29 01:14:32.997599 | orchestrator | Thursday 29 May 2025 01:10:43 +0000 (0:00:00.376) 0:04:22.998 ********** 2025-05-29 01:14:32.997606 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-29 01:14:32.997613 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-29 01:14:32.997619 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-29 01:14:32.997626 | orchestrator | 2025-05-29 01:14:32.997633 | orchestrator | TASK [nova-cell : Copy over ceph cinder keyring file] ************************** 2025-05-29 01:14:32.997639 | orchestrator | Thursday 29 May 2025 01:10:45 +0000 (0:00:01.263) 0:04:24.261 ********** 2025-05-29 01:14:32.997646 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-29 01:14:32.997653 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-29 01:14:32.997659 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-29 01:14:32.997666 | orchestrator | 2025-05-29 01:14:32.997673 | orchestrator | TASK [nova-cell : Copy over ceph.conf] ***************************************** 2025-05-29 01:14:32.997679 | orchestrator | Thursday 29 May 2025 01:10:46 +0000 (0:00:01.197) 0:04:25.459 ********** 2025-05-29 01:14:32.997686 | orchestrator | changed: [testbed-node-4] => (item=nova-compute) 2025-05-29 01:14:32.997692 | orchestrator | changed: [testbed-node-3] => (item=nova-compute) 2025-05-29 01:14:32.997699 | orchestrator | changed: [testbed-node-5] => (item=nova-compute) 2025-05-29 01:14:32.997706 | orchestrator | changed: [testbed-node-4] => (item=nova-libvirt) 2025-05-29 01:14:32.997712 | orchestrator | changed: [testbed-node-3] => (item=nova-libvirt) 2025-05-29 01:14:32.997719 | orchestrator | changed: [testbed-node-5] => (item=nova-libvirt) 2025-05-29 
01:14:32.997731 | orchestrator | 2025-05-29 01:14:32.997738 | orchestrator | TASK [nova-cell : Ensure /etc/ceph directory exists (host libvirt)] ************ 2025-05-29 01:14:32.997745 | orchestrator | Thursday 29 May 2025 01:10:51 +0000 (0:00:04.918) 0:04:30.378 ********** 2025-05-29 01:14:32.997751 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:14:32.997760 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:14:32.997768 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:14:32.997775 | orchestrator | 2025-05-29 01:14:32.997783 | orchestrator | TASK [nova-cell : Copy over ceph.conf (host libvirt)] ************************** 2025-05-29 01:14:32.997791 | orchestrator | Thursday 29 May 2025 01:10:51 +0000 (0:00:00.469) 0:04:30.847 ********** 2025-05-29 01:14:32.997799 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:14:32.997807 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:14:32.997814 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:14:32.997822 | orchestrator | 2025-05-29 01:14:32.997830 | orchestrator | TASK [nova-cell : Ensuring libvirt secrets directory exists] ******************* 2025-05-29 01:14:32.997837 | orchestrator | Thursday 29 May 2025 01:10:52 +0000 (0:00:00.459) 0:04:31.307 ********** 2025-05-29 01:14:32.997845 | orchestrator | changed: [testbed-node-3] 2025-05-29 01:14:32.997852 | orchestrator | changed: [testbed-node-4] 2025-05-29 01:14:32.997860 | orchestrator | changed: [testbed-node-5] 2025-05-29 01:14:32.997867 | orchestrator | 2025-05-29 01:14:32.997875 | orchestrator | TASK [nova-cell : Pushing nova secret xml for libvirt] ************************* 2025-05-29 01:14:32.997883 | orchestrator | Thursday 29 May 2025 01:10:53 +0000 (0:00:01.349) 0:04:32.656 ********** 2025-05-29 01:14:32.997891 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-29 01:14:32.997900 | orchestrator | changed: 
[testbed-node-4] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-29 01:14:32.997908 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd', 'name': 'client.nova secret', 'enabled': True}) 2025-05-29 01:14:32.997916 | orchestrator | changed: [testbed-node-3] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-29 01:14:32.997923 | orchestrator | changed: [testbed-node-4] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-29 01:14:32.997931 | orchestrator | changed: [testbed-node-5] => (item={'uuid': '63dd366f-e403-41f2-beff-dad9980a1637', 'name': 'client.cinder secret', 'enabled': 'yes'}) 2025-05-29 01:14:32.997939 | orchestrator | 2025-05-29 01:14:32.997947 | orchestrator | TASK [nova-cell : Pushing secrets key for libvirt] ***************************** 2025-05-29 01:14:32.997973 | orchestrator | Thursday 29 May 2025 01:10:56 +0000 (0:00:03.395) 0:04:36.052 ********** 2025-05-29 01:14:32.997982 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-29 01:14:32.997989 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-29 01:14:32.997997 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-29 01:14:32.998005 | orchestrator | changed: [testbed-node-3] => (item=None) 2025-05-29 01:14:32.998013 | orchestrator | changed: [testbed-node-3] 2025-05-29 01:14:32.998043 | orchestrator | changed: [testbed-node-4] => (item=None) 2025-05-29 01:14:32.998051 | orchestrator | changed: [testbed-node-4] 2025-05-29 01:14:32.998057 | orchestrator | changed: [testbed-node-5] => (item=None) 2025-05-29 01:14:32.998064 | orchestrator | changed: [testbed-node-5] 2025-05-29 01:14:32.998071 | orchestrator | 2025-05-29 01:14:32.998078 | orchestrator | TASK [nova-cell : Check if policies shall be overwritten] 
********************** 2025-05-29 01:14:32.998084 | orchestrator | Thursday 29 May 2025 01:11:00 +0000 (0:00:03.391) 0:04:39.444 ********** 2025-05-29 01:14:32.998091 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:14:32.998098 | orchestrator | 2025-05-29 01:14:32.998104 | orchestrator | TASK [nova-cell : Set nova policy file] **************************************** 2025-05-29 01:14:32.998117 | orchestrator | Thursday 29 May 2025 01:11:00 +0000 (0:00:00.122) 0:04:39.567 ********** 2025-05-29 01:14:32.998123 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:14:32.998130 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:14:32.998137 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:14:32.998143 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.998150 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.998160 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:32.998167 | orchestrator | 2025-05-29 01:14:32.998174 | orchestrator | TASK [nova-cell : Check for vendordata file] *********************************** 2025-05-29 01:14:32.998181 | orchestrator | Thursday 29 May 2025 01:11:01 +0000 (0:00:00.869) 0:04:40.436 ********** 2025-05-29 01:14:32.998187 | orchestrator | ok: [testbed-node-3 -> localhost] 2025-05-29 01:14:32.998194 | orchestrator | 2025-05-29 01:14:32.998201 | orchestrator | TASK [nova-cell : Set vendordata file path] ************************************ 2025-05-29 01:14:32.998208 | orchestrator | Thursday 29 May 2025 01:11:01 +0000 (0:00:00.386) 0:04:40.823 ********** 2025-05-29 01:14:32.998214 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:14:32.998221 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:14:32.998228 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:14:32.998248 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:32.998254 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:32.998261 | orchestrator | skipping: [testbed-node-2] 
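The "Pushing nova secret xml for libvirt" / "Pushing secrets key for libvirt" pair above first installs a libvirt secret definition per Ceph client (the `client.nova secret` and `client.cinder secret` items with their UUIDs), then installs the matching key value. A rough sketch of the XML such a secret definition contains, built with a hypothetical helper (the actual task renders a Jinja2 template into the libvirt secrets directory created in "Ensuring libvirt secrets directory exists"):

```python
from xml.etree import ElementTree as ET

def ceph_secret_xml(uuid: str, name: str) -> str:
    """Build a libvirt <secret> definition for a Ceph client.
    Hypothetical helper mirroring the shape of the templated file."""
    secret = ET.Element("secret", ephemeral="no", private="no")
    ET.SubElement(secret, "uuid").text = uuid
    usage = ET.SubElement(secret, "usage", type="ceph")
    ET.SubElement(usage, "name").text = name
    return ET.tostring(secret, encoding="unicode")

# UUID and name taken from the client.nova item in the run above.
print(ceph_secret_xml("5a2bf0bf-e1ab-4a6a-bc32-404bb6ba91fd",
                      "client.nova secret"))
```

A definition like this can also be loaded manually with `virsh secret-define` followed by `virsh secret-set-value` when debugging a compute node.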
2025-05-29 01:14:32.998267 | orchestrator | 2025-05-29 01:14:32.998274 | orchestrator | TASK [nova-cell : Copying over config.json files for services] ***************** 2025-05-29 01:14:32.998281 | orchestrator | Thursday 29 May 2025 01:11:02 +0000 (0:00:00.699) 0:04:41.523 ********** 2025-05-29 01:14:32.998288 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.998296 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.998324 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 
'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.998337 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.998349 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.998356 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.998363 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998371 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998402 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998414 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 
'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998422 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.998430 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.998437 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998444 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.998470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.998486 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': 
'30'}}}) 2025-05-29 01:14:32.998496 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.998504 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.998511 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998518 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 
'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.998525 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.998537 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.998563 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.998575 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998582 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.998589 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.998597 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.998604 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.998616 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998641 | orchestrator | 
skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.998653 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.998660 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.998667 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 
'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.998675 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998682 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.998712 | orchestrator | skipping: [testbed-node-0] => (item={'key': 
'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.998721 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998731 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.998738 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998745 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.998752 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 
'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.998781 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.998790 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998801 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': 
['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.998808 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998815 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.998826 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.998852 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.998860 | orchestrator | 2025-05-29 01:14:32.998867 | orchestrator | TASK [nova-cell : Copying over nova.conf] ************************************** 2025-05-29 01:14:32.998874 | orchestrator | Thursday 29 May 2025 01:11:06 +0000 (0:00:03.918) 0:04:45.441 ********** 2025-05-29 01:14:32.998885 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.998892 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.998899 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.998906 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 
'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.998917 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.998943 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.998955 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.998962 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.998970 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.998977 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 
'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.998991 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.998998 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.999024 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': 
['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.999038 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.999046 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.999053 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 
'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.999065 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.999072 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.999097 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.999106 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.999117 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.999124 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.999135 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:32.999142 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:32.999168 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:32.999181 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.999189 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 
'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.999200 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.999207 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.999214 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 
'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.999281 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.999295 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 
'timeout': '30'}}}) 2025-05-29 01:14:32.999303 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.999315 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:32.999322 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  
2025-05-29 01:14:32.999329 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.999354 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:32.999362 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:32.999373 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 
'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:32.999380 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.999392 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.999399 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 
'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.999406 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.999433 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.999444 | orchestrator | skipping: [testbed-node-2] => 
(item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:32.999452 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:32.999463 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2025-05-29 01:14:32.999470 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 
2025-05-29 01:14:32.999477 | orchestrator |
2025-05-29 01:14:32.999484 | orchestrator | TASK [nova-cell : Copying over Nova compute provider config] *******************
2025-05-29 01:14:32.999491 | orchestrator | Thursday 29 May 2025 01:11:13 +0000 (0:00:06.824) 0:04:52.266 **********
2025-05-29 01:14:32.999498 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:14:32.999505 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:14:32.999511 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:14:32.999518 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:32.999525 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.999531 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.999538 | orchestrator |
2025-05-29 01:14:32.999544 | orchestrator | TASK [nova-cell : Copying over libvirt configuration] **************************
2025-05-29 01:14:32.999551 | orchestrator | Thursday 29 May 2025 01:11:14 +0000 (0:00:01.399) 0:04:53.666 **********
2025-05-29 01:14:32.999558 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 
2025-05-29 01:14:32.999565 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 
2025-05-29 01:14:32.999571 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'}) 
2025-05-29 01:14:32.999578 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 
2025-05-29 01:14:32.999585 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:32.999610 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 
2025-05-29 01:14:32.999618 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.999625 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'}) 
2025-05-29 01:14:32.999632 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.999638 | orchestrator | changed: [testbed-node-3] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-29 01:14:32.999645 | orchestrator | changed: [testbed-node-5] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-29 01:14:32.999651 | orchestrator | changed: [testbed-node-4] => (item={'src': 'qemu.conf.j2', 'dest': 'qemu.conf'})
2025-05-29 01:14:32.999658 | orchestrator | changed: [testbed-node-4] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-29 01:14:32.999668 | orchestrator | changed: [testbed-node-3] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-29 01:14:32.999675 | orchestrator | changed: [testbed-node-5] => (item={'src': 'libvirtd.conf.j2', 'dest': 'libvirtd.conf'})
2025-05-29 01:14:32.999682 | orchestrator |
2025-05-29 01:14:32.999688 | orchestrator | TASK [nova-cell : Copying over libvirt TLS keys] *******************************
2025-05-29 01:14:32.999695 | orchestrator | Thursday 29 May 2025 01:11:19 +0000 (0:00:04.719) 0:04:58.386 **********
2025-05-29 01:14:32.999702 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:14:32.999712 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:14:32.999719 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:14:32.999725 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:32.999732 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.999738 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.999745 | orchestrator |
2025-05-29 01:14:32.999751 | orchestrator | TASK [nova-cell : Copying over libvirt SASL configuration] *********************
2025-05-29 01:14:32.999758 | orchestrator | Thursday 29 May 2025 01:11:20 +0000 (0:00:00.898) 0:04:59.284 **********
2025-05-29 01:14:32.999765 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 
2025-05-29 01:14:32.999772 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 
2025-05-29 01:14:32.999779 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'}) 
2025-05-29 01:14:32.999785 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 
2025-05-29 01:14:32.999791 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 
2025-05-29 01:14:32.999798 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'}) 
2025-05-29 01:14:32.999804 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-29 01:14:32.999810 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-29 01:14:32.999816 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-compute'})
2025-05-29 01:14:32.999822 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 
2025-05-29 01:14:32.999828 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:32.999834 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 
2025-05-29 01:14:32.999841 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:32.999847 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'}) 
2025-05-29 01:14:32.999853 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:32.999859 | orchestrator | changed: [testbed-node-5] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-29 01:14:32.999865 | orchestrator | changed: [testbed-node-4] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-29 01:14:32.999871 | orchestrator | changed: [testbed-node-3] => (item={'src': 'auth.conf.j2', 'dest': 'auth.conf', 'service': 'nova-libvirt'})
2025-05-29 01:14:32.999877 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-29 01:14:32.999883 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-29 01:14:32.999890 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sasl.conf.j2', 'dest': 'sasl.conf', 'service': 'nova-libvirt'})
2025-05-29 01:14:32.999900 | orchestrator |
2025-05-29 01:14:32.999906 | orchestrator | TASK [nova-cell : Copying files for nova-ssh] **********************************
2025-05-29 01:14:32.999913 | orchestrator | Thursday 29 May 2025 01:11:27 +0000 (0:00:07.531) 0:05:06.816 **********
2025-05-29 01:14:32.999919 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 
2025-05-29 01:14:32.999925 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 
2025-05-29 01:14:32.999947 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 
2025-05-29 01:14:32.999955 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'}) 
2025-05-29 01:14:32.999961 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 
2025-05-29 01:14:32.999967 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 
2025-05-29 01:14:32.999973 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa', 'dest': 'id_rsa'}) 
2025-05-29 01:14:32.999979 | orchestrator | changed: [testbed-node-4] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-29 01:14:32.999986 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 
2025-05-29 01:14:32.999992 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'}) 
2025-05-29 01:14:32.999998 | orchestrator | changed: [testbed-node-3] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-29 01:14:33.000004 | orchestrator | skipping: [testbed-node-0] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 
2025-05-29 01:14:33.000010 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:33.000017 | orchestrator | changed: [testbed-node-5] => (item={'src': 'sshd_config.j2', 'dest': 'sshd_config'})
2025-05-29 01:14:33.000028 | orchestrator | skipping: [testbed-node-2] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 
2025-05-29 01:14:33.000035 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:33.000041 | orchestrator | skipping: [testbed-node-1] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'}) 
2025-05-29 01:14:33.000047 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:33.000053 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-29 01:14:33.000060 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-29 01:14:33.000066 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa', 'dest': 'id_rsa'})
2025-05-29 01:14:33.000072 | orchestrator | changed: [testbed-node-4] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-29 01:14:33.000078 | orchestrator | changed: [testbed-node-3] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-29 01:14:33.000084 | orchestrator | changed: [testbed-node-5] => (item={'src': 'id_rsa.pub', 'dest': 'id_rsa.pub'})
2025-05-29 01:14:33.000090 | orchestrator | changed: [testbed-node-4] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-29 01:14:33.000096 | orchestrator | changed: [testbed-node-3] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-29 01:14:33.000102 | orchestrator | changed: [testbed-node-5] => (item={'src': 'ssh_config.j2', 'dest': 'ssh_config'})
2025-05-29 01:14:33.000109 | orchestrator |
2025-05-29 01:14:33.000115 | orchestrator | TASK [nova-cell : Copying VMware vCenter CA file] ******************************
2025-05-29 01:14:33.000121 | orchestrator | Thursday 29 May 2025 01:11:37 +0000 (0:00:09.917) 0:05:16.734 **********
2025-05-29 01:14:33.000127 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:14:33.000133 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:14:33.000140 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:14:33.000146 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:33.000152 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:33.000158 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:33.000164 | orchestrator |
2025-05-29 01:14:33.000170 | orchestrator | TASK [nova-cell : Copying 'release' file for nova_compute] *********************
2025-05-29 01:14:33.000181 | orchestrator | Thursday 29 May 2025 01:11:38 +0000 (0:00:00.740) 0:05:17.475 **********
2025-05-29 01:14:33.000187 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:14:33.000193 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:14:33.000199 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:14:33.000205 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:33.000211 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:33.000217 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:33.000223 | orchestrator |
2025-05-29 01:14:33.000243 | orchestrator | TASK [nova-cell : Generating 'hostnqn' file for nova_compute] ******************
2025-05-29 01:14:33.000249 | orchestrator | Thursday 29 May 2025 01:11:39 +0000 (0:00:00.909) 0:05:18.384 **********
2025-05-29 01:14:33.000256 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:33.000262 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:33.000268 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:33.000274 | orchestrator | changed: [testbed-node-4]
2025-05-29 01:14:33.000280 | orchestrator | changed: [testbed-node-3]
2025-05-29 01:14:33.000286 | orchestrator | changed: [testbed-node-5]
2025-05-29 01:14:33.000292 | orchestrator |
2025-05-29 01:14:33.000298 | orchestrator | TASK [nova-cell : Copying over existing policy file] ***************************
2025-05-29 01:14:33.000305 | orchestrator | Thursday 29 May 2025 01:11:41 +0000 (0:00:02.744) 0:05:21.129 **********
2025-05-29 01:14:33.000330 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 
'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:33.000338 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:33.000348 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.000355 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.000367 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:33.000373 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000380 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 
'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000405 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000413 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:14:33.000423 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:33.000430 | orchestrator | skipping: [testbed-node-4] => (item={'key': 
'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:33.000440 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.000446 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.000453 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 
'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:33.000477 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000485 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000495 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 
'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000506 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:14:33.000513 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:33.000520 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:33.000526 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.000537 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.000544 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:33.000553 | orchestrator | skipping: [testbed-node-5] => (item={'key': 
'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000564 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000571 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port 
nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000577 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:14:33.000584 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:33.000594 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:33.000601 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': 
['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.000611 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.000621 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:33.000628 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000635 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000641 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000647 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:33.000658 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 
'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:33.000668 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:33.000679 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.000685 | orchestrator | skipping: [testbed-node-1] => 
(item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.000692 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:33.000698 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000708 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 
'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000715 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000725 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:33.000737 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 
'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:33.000744 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:33.000750 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.000757 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 
'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.000766 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:33.000773 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.000788 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-29 01:14:33.000795 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})
2025-05-29 01:14:33.000801 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:33.000808 | orchestrator |
2025-05-29 01:14:33.000814 | orchestrator | TASK [nova-cell : Copying over vendordata file to containers] ******************
2025-05-29 01:14:33.000820 | orchestrator | Thursday 29 May 2025 01:11:43 +0000 (0:00:01.848) 0:05:22.977 **********
2025-05-29 01:14:33.000827 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-05-29 01:14:33.000833 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-05-29 01:14:33.000840 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:14:33.000846 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-05-29 01:14:33.000852 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-05-29 01:14:33.000858 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:14:33.000864 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-05-29 01:14:33.000870 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-05-29 01:14:33.000877 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:14:33.000883 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-05-29 01:14:33.000889 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-05-29 01:14:33.000895 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:33.000901 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-05-29 01:14:33.000907 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-05-29 01:14:33.000914 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:33.000920 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-05-29 01:14:33.000926 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-05-29 01:14:33.000932 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:33.000938 | orchestrator |
2025-05-29 01:14:33.000944 | orchestrator | TASK [nova-cell : Check nova-cell containers] **********************************
2025-05-29 01:14:33.000951 | orchestrator | Thursday 29 May 2025 01:11:44 +0000 (0:00:01.023) 0:05:24.000 **********
2025-05-29 01:14:33.000961 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})
2025-05-29 01:14:33.000975 | 
orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:33.000982 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-29 01:14:33.000988 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 
'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:33.000995 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:33.001004 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  
2025-05-29 01:14:33.001015 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}}) 2025-05-29 01:14:33.001025 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-libvirt', 'value': {'container_name': 'nova_libvirt', 'group': 'compute', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-libvirt:8.0.0.20241206', 'pid_mode': 'host', 'cgroupns_mode': 'host', 'privileged': True, 'volumes': ['/etc/kolla/nova-libvirt/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', '', '/sys/fs/cgroup:/sys/fs/cgroup', 'kolla_logs:/var/log/kolla/', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', 'nova_libvirt_qemu:/etc/libvirt/qemu', ''], 'dimensions': {'ulimits': {'memlock': {'soft': 67108864, 'hard': 67108864}}}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'virsh version --daemon'], 'timeout': '30'}}})  2025-05-29 01:14:33.001032 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 
'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}})  2025-05-29 01:14:33.001039 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-29 01:14:33.001045 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.001058 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 
'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.14:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.001065 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:33.001074 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.001081 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', 
'/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:33.001088 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.10:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.001094 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:33.001101 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 
'healthcheck_curl http://192.168.16.11:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:33.001114 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.11:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.001120 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-29 01:14:33.001130 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:33.001137 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 
'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.001144 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.13:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.001150 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:33.001157 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': 
['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.001167 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6080/vnc_lite.html'], 'timeout': '30'}}}) 2025-05-29 01:14:33.001177 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.12:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.001187 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-ssh', 'value': {'container_name': 'nova_ssh', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-ssh:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-ssh/:/var/lib/kolla/config_files/:ro', 
'/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla', 'nova_compute:/var/lib/nova', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_listen sshd 8022'], 'timeout': '30'}}}) 2025-05-29 01:14:33.001194 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:33.001200 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-novncproxy', 'value': {'container_name': 'nova_novncproxy', 'group': 'nova-novncproxy', 'image': 'registry.osism.tech/kolla/release/nova-novncproxy:29.2.1.20241206', 'enabled': True, 'volumes': ['/etc/kolla/nova-novncproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6080/vnc_lite.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.001207 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-spicehtml5proxy', 'value': {'container_name': 'nova_spicehtml5proxy', 'group': 'nova-spicehtml5proxy', 'image': 'registry.osism.tech/kolla/release/nova-spicehtml5proxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-spicehtml5proxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 
'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_curl http://192.168.16.15:6082/spice_auto.html'], 'timeout': '30'}}})  2025-05-29 01:14:33.001217 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-serialproxy', 'value': {'container_name': 'nova_serialproxy', 'group': 'nova-serialproxy', 'image': 'registry.osism.tech/kolla/release/nova-serialproxy:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-serialproxy/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}}})  2025-05-29 01:14:33.001224 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.001266 | orchestrator | changed: [testbed-node-4] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': 
'5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:33.001277 | orchestrator | skipping: [testbed-node-4] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.001284 | orchestrator | changed: [testbed-node-0] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:33.001291 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 
'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.001297 | orchestrator | changed: [testbed-node-1] => (item={'key': 'nova-conductor', 'value': {'container_name': 'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:33.001310 | orchestrator | skipping: [testbed-node-0] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.001320 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 
'/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.001330 | orchestrator | skipping: [testbed-node-1] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.001337 | orchestrator | changed: [testbed-node-3] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:33.001343 | orchestrator | changed: [testbed-node-2] => (item={'key': 'nova-conductor', 'value': {'container_name': 
'nova_conductor', 'group': 'nova-conductor', 'enabled': True, 'image': 'registry.osism.tech/kolla/release/nova-conductor:29.2.1.20241206', 'volumes': ['/etc/kolla/nova-conductor/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-conductor 5672'], 'timeout': '30'}}}) 2025-05-29 01:14:33.001354 | orchestrator | skipping: [testbed-node-3] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.001361 | orchestrator | changed: [testbed-node-5] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}}) 2025-05-29 
01:14:33.001371 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute', 'value': {'container_name': 'nova_compute', 'group': 'compute', 'image': 'registry.osism.tech/kolla/release/nova-compute:29.2.1.20241206', 'environment': {'LIBGUESTFS_BACKEND': 'direct'}, 'privileged': True, 'enabled': True, 'ipc_mode': 'host', 'volumes': ['/etc/kolla/nova-compute/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', '/lib/modules:/lib/modules:ro', '/run:/run:shared', '/dev:/dev', 'kolla_logs:/var/log/kolla/', 'iscsi_info:/etc/iscsi', 'libvirtd:/var/lib/libvirt', 'nova_compute:/var/lib/nova/', '', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.001381 | orchestrator | skipping: [testbed-node-5] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': {'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.001388 | orchestrator | skipping: [testbed-node-2] => (item={'key': 'nova-compute-ironic', 'value': {'container_name': 'nova_compute_ironic', 'group': 'nova-compute-ironic', 'image': 'registry.osism.tech/kolla/release/nova-compute-ironic:29.2.1.20241206', 'enabled': False, 'volumes': ['/etc/kolla/nova-compute-ironic/:/var/lib/kolla/config_files/:ro', '/etc/localtime:/etc/localtime:ro', '/etc/timezone:/etc/timezone:ro', 'kolla_logs:/var/log/kolla/', ''], 'dimensions': {}, 'healthcheck': 
{'interval': '30', 'retries': '3', 'start_period': '5', 'test': ['CMD-SHELL', 'healthcheck_port nova-compute 5672'], 'timeout': '30'}}})  2025-05-29 01:14:33.001394 | orchestrator | 2025-05-29 01:14:33.001401 | orchestrator | TASK [nova-cell : include_tasks] *********************************************** 2025-05-29 01:14:33.001407 | orchestrator | Thursday 29 May 2025 01:11:47 +0000 (0:00:03.227) 0:05:27.228 ********** 2025-05-29 01:14:33.001413 | orchestrator | skipping: [testbed-node-3] 2025-05-29 01:14:33.001419 | orchestrator | skipping: [testbed-node-4] 2025-05-29 01:14:33.001430 | orchestrator | skipping: [testbed-node-5] 2025-05-29 01:14:33.001436 | orchestrator | skipping: [testbed-node-0] 2025-05-29 01:14:33.001442 | orchestrator | skipping: [testbed-node-1] 2025-05-29 01:14:33.001448 | orchestrator | skipping: [testbed-node-2] 2025-05-29 01:14:33.001454 | orchestrator | 2025-05-29 01:14:33.001460 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-29 01:14:33.001467 | orchestrator | Thursday 29 May 2025 01:11:48 +0000 (0:00:00.904) 0:05:28.132 ********** 2025-05-29 01:14:33.001473 | orchestrator | 2025-05-29 01:14:33.001479 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-29 01:14:33.001485 | orchestrator | Thursday 29 May 2025 01:11:49 +0000 (0:00:00.138) 0:05:28.271 ********** 2025-05-29 01:14:33.001491 | orchestrator | 2025-05-29 01:14:33.001498 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-29 01:14:33.001504 | orchestrator | Thursday 29 May 2025 01:11:49 +0000 (0:00:00.302) 0:05:28.573 ********** 2025-05-29 01:14:33.001510 | orchestrator | 2025-05-29 01:14:33.001516 | orchestrator | TASK [nova-cell : Flush handlers] ********************************************** 2025-05-29 01:14:33.001522 | orchestrator | Thursday 29 May 2025 01:11:49 +0000 (0:00:00.110) 0:05:28.683 ********** 
2025-05-29 01:14:33.001528 | orchestrator |
2025-05-29 01:14:33.001534 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-29 01:14:33.001541 | orchestrator | Thursday 29 May 2025 01:11:49 +0000 (0:00:00.293) 0:05:28.977 **********
2025-05-29 01:14:33.001547 | orchestrator |
2025-05-29 01:14:33.001553 | orchestrator | TASK [nova-cell : Flush handlers] **********************************************
2025-05-29 01:14:33.001559 | orchestrator | Thursday 29 May 2025 01:11:49 +0000 (0:00:00.110) 0:05:29.087 **********
2025-05-29 01:14:33.001565 | orchestrator |
2025-05-29 01:14:33.001571 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-conductor container] *****************
2025-05-29 01:14:33.001578 | orchestrator | Thursday 29 May 2025 01:11:50 +0000 (0:00:00.293) 0:05:29.381 **********
2025-05-29 01:14:33.001584 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:14:33.001590 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:14:33.001596 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:14:33.001602 | orchestrator |
2025-05-29 01:14:33.001608 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-novncproxy container] ****************
2025-05-29 01:14:33.001614 | orchestrator | Thursday 29 May 2025 01:11:57 +0000 (0:00:07.655) 0:05:37.037 **********
2025-05-29 01:14:33.001620 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:14:33.001627 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:14:33.001633 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:14:33.001639 | orchestrator |
2025-05-29 01:14:33.001645 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-ssh container] ***********************
2025-05-29 01:14:33.001652 | orchestrator | Thursday 29 May 2025 01:12:12 +0000 (0:00:15.173) 0:05:52.210 **********
2025-05-29 01:14:33.001660 | orchestrator | changed: [testbed-node-5]
2025-05-29 01:14:33.001667 | orchestrator | changed: [testbed-node-4]
2025-05-29 01:14:33.001673 | orchestrator | changed: [testbed-node-3]
2025-05-29 01:14:33.001679 | orchestrator |
2025-05-29 01:14:33.001685 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-libvirt container] *******************
2025-05-29 01:14:33.001692 | orchestrator | Thursday 29 May 2025 01:12:34 +0000 (0:00:21.685) 0:06:13.895 **********
2025-05-29 01:14:33.001698 | orchestrator | changed: [testbed-node-4]
2025-05-29 01:14:33.001704 | orchestrator | changed: [testbed-node-5]
2025-05-29 01:14:33.001710 | orchestrator | changed: [testbed-node-3]
2025-05-29 01:14:33.001716 | orchestrator |
2025-05-29 01:14:33.001723 | orchestrator | RUNNING HANDLER [nova-cell : Checking libvirt container is ready] **************
2025-05-29 01:14:33.001729 | orchestrator | Thursday 29 May 2025 01:13:02 +0000 (0:00:27.715) 0:06:41.610 **********
2025-05-29 01:14:33.001735 | orchestrator | changed: [testbed-node-3]
2025-05-29 01:14:33.001740 | orchestrator | changed: [testbed-node-4]
2025-05-29 01:14:33.001745 | orchestrator | changed: [testbed-node-5]
2025-05-29 01:14:33.001751 | orchestrator |
2025-05-29 01:14:33.001756 | orchestrator | RUNNING HANDLER [nova-cell : Create libvirt SASL user] *************************
2025-05-29 01:14:33.001765 | orchestrator | Thursday 29 May 2025 01:13:03 +0000 (0:00:00.762) 0:06:42.373 **********
2025-05-29 01:14:33.001771 | orchestrator | changed: [testbed-node-3]
2025-05-29 01:14:33.001776 | orchestrator | changed: [testbed-node-4]
2025-05-29 01:14:33.001781 | orchestrator | changed: [testbed-node-5]
2025-05-29 01:14:33.001787 | orchestrator |
2025-05-29 01:14:33.001792 | orchestrator | RUNNING HANDLER [nova-cell : Restart nova-compute container] *******************
2025-05-29 01:14:33.001801 | orchestrator | Thursday 29 May 2025 01:13:04 +0000 (0:00:00.938) 0:06:43.311 **********
2025-05-29 01:14:33.001807 | orchestrator | changed: [testbed-node-4]
2025-05-29 01:14:33.001812 | orchestrator | changed: [testbed-node-5]
2025-05-29 01:14:33.001817 | orchestrator | changed: [testbed-node-3]
2025-05-29 01:14:33.001823 | orchestrator |
2025-05-29 01:14:33.001828 | orchestrator | RUNNING HANDLER [nova-cell : Wait for nova-compute services to update service versions] ***
2025-05-29 01:14:33.001834 | orchestrator | Thursday 29 May 2025 01:13:27 +0000 (0:00:23.175) 0:07:06.487 **********
2025-05-29 01:14:33.001839 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:14:33.001844 | orchestrator |
2025-05-29 01:14:33.001850 | orchestrator | TASK [nova-cell : Waiting for nova-compute services to register themselves] ****
2025-05-29 01:14:33.001855 | orchestrator | Thursday 29 May 2025 01:13:27 +0000 (0:00:00.133) 0:07:06.620 **********
2025-05-29 01:14:33.001860 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:14:33.001866 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:14:33.001871 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:33.001876 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:33.001882 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:33.001887 | orchestrator | FAILED - RETRYING: [testbed-node-4 -> testbed-node-0]: Waiting for nova-compute services to register themselves (20 retries left).
2025-05-29 01:14:33.001893 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-05-29 01:14:33.001902 | orchestrator |
2025-05-29 01:14:33.001911 | orchestrator | TASK [nova-cell : Fail if nova-compute service failed to register] *************
2025-05-29 01:14:33.001921 | orchestrator | Thursday 29 May 2025 01:13:49 +0000 (0:00:22.324) 0:07:28.945 **********
2025-05-29 01:14:33.001930 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:14:33.001938 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:14:33.001946 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:14:33.001955 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:33.001964 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:33.001972 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:33.001981 | orchestrator |
2025-05-29 01:14:33.001990 | orchestrator | TASK [nova-cell : Include discover_computes.yml] *******************************
2025-05-29 01:14:33.002000 | orchestrator | Thursday 29 May 2025 01:13:59 +0000 (0:00:09.808) 0:07:38.754 **********
2025-05-29 01:14:33.002007 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:14:33.002012 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:14:33.002039 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:33.002044 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:33.002050 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:33.002055 | orchestrator | included: /ansible/roles/nova-cell/tasks/discover_computes.yml for testbed-node-4
2025-05-29 01:14:33.002061 | orchestrator |
2025-05-29 01:14:33.002066 | orchestrator | TASK [nova-cell : Get a list of existing cells] ********************************
2025-05-29 01:14:33.002072 | orchestrator | Thursday 29 May 2025 01:14:02 +0000 (0:00:03.116) 0:07:41.870 **********
2025-05-29 01:14:33.002077 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-05-29 01:14:33.002083 | orchestrator |
2025-05-29 01:14:33.002089 | orchestrator | TASK [nova-cell : Extract current cell settings from list] *********************
2025-05-29 01:14:33.002098 | orchestrator | Thursday 29 May 2025 01:14:12 +0000 (0:00:10.048) 0:07:51.919 **********
2025-05-29 01:14:33.002107 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-05-29 01:14:33.002116 | orchestrator |
2025-05-29 01:14:33.002127 | orchestrator | TASK [nova-cell : Fail if cell settings not found] *****************************
2025-05-29 01:14:33.002138 | orchestrator | Thursday 29 May 2025 01:14:13 +0000 (0:00:01.153) 0:07:53.072 **********
2025-05-29 01:14:33.002143 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:14:33.002149 | orchestrator |
2025-05-29 01:14:33.002154 | orchestrator | TASK [nova-cell : Discover nova hosts] *****************************************
2025-05-29 01:14:33.002159 | orchestrator | Thursday 29 May 2025 01:14:15 +0000 (0:00:01.205) 0:07:54.277 **********
2025-05-29 01:14:33.002165 | orchestrator | ok: [testbed-node-4 -> testbed-node-0(192.168.16.10)]
2025-05-29 01:14:33.002170 | orchestrator |
2025-05-29 01:14:33.002175 | orchestrator | TASK [nova-cell : Remove old nova_libvirt_secrets container volume] ************
2025-05-29 01:14:33.002181 | orchestrator | Thursday 29 May 2025 01:14:23 +0000 (0:00:08.694) 0:08:02.972 **********
2025-05-29 01:14:33.002186 | orchestrator | ok: [testbed-node-3]
2025-05-29 01:14:33.002192 | orchestrator | ok: [testbed-node-4]
2025-05-29 01:14:33.002197 | orchestrator | ok: [testbed-node-5]
2025-05-29 01:14:33.002202 | orchestrator | ok: [testbed-node-0]
2025-05-29 01:14:33.002208 | orchestrator | ok: [testbed-node-1]
2025-05-29 01:14:33.002213 | orchestrator | ok: [testbed-node-2]
2025-05-29 01:14:33.002218 | orchestrator |
2025-05-29 01:14:33.002228 | orchestrator | PLAY [Refresh nova scheduler cell cache] ***************************************
2025-05-29 01:14:33.002276 | orchestrator |
2025-05-29 01:14:33.002282 | orchestrator | TASK [nova : Refresh cell cache in nova scheduler] *****************************
2025-05-29 01:14:33.002287 | orchestrator | Thursday 29 May 2025 01:14:25 +0000 (0:00:02.101) 0:08:05.074 **********
2025-05-29 01:14:33.002293 | orchestrator | changed: [testbed-node-0]
2025-05-29 01:14:33.002298 | orchestrator | changed: [testbed-node-1]
2025-05-29 01:14:33.002303 | orchestrator | changed: [testbed-node-2]
2025-05-29 01:14:33.002309 | orchestrator |
2025-05-29 01:14:33.002314 | orchestrator | PLAY [Reload global Nova super conductor services] *****************************
2025-05-29 01:14:33.002320 | orchestrator |
2025-05-29 01:14:33.002325 | orchestrator | TASK [nova : Reload nova super conductor services to remove RPC version pin] ***
2025-05-29 01:14:33.002331 | orchestrator | Thursday 29 May 2025 01:14:26 +0000 (0:00:01.001) 0:08:06.076 **********
2025-05-29 01:14:33.002336 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:33.002341 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:33.002347 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:33.002352 | orchestrator |
2025-05-29 01:14:33.002357 | orchestrator | PLAY [Reload Nova cell services] ***********************************************
2025-05-29 01:14:33.002363 | orchestrator |
2025-05-29 01:14:33.002368 | orchestrator | TASK [nova-cell : Reload nova cell services to remove RPC version cap] *********
2025-05-29 01:14:33.002373 | orchestrator | Thursday 29 May 2025 01:14:27 +0000 (0:00:00.831) 0:08:06.907 **********
2025-05-29 01:14:33.002379 | orchestrator | skipping: [testbed-node-3] => (item=nova-conductor)
2025-05-29 01:14:33.002384 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute)
2025-05-29 01:14:33.002393 | orchestrator | skipping: [testbed-node-3] => (item=nova-compute-ironic)
2025-05-29 01:14:33.002399 | orchestrator | skipping: [testbed-node-3] => (item=nova-novncproxy)
2025-05-29 01:14:33.002404 | orchestrator | skipping: [testbed-node-3] => (item=nova-serialproxy)
2025-05-29 01:14:33.002410 | orchestrator | skipping: [testbed-node-3] => (item=nova-spicehtml5proxy)
2025-05-29 01:14:33.002415 | orchestrator | skipping: [testbed-node-3]
2025-05-29 01:14:33.002420 | orchestrator | skipping: [testbed-node-4] => (item=nova-conductor)
2025-05-29 01:14:33.002426 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute)
2025-05-29 01:14:33.002431 | orchestrator | skipping: [testbed-node-4] => (item=nova-compute-ironic)
2025-05-29 01:14:33.002436 | orchestrator | skipping: [testbed-node-4] => (item=nova-novncproxy)
2025-05-29 01:14:33.002442 | orchestrator | skipping: [testbed-node-4] => (item=nova-serialproxy)
2025-05-29 01:14:33.002447 | orchestrator | skipping: [testbed-node-4] => (item=nova-spicehtml5proxy)
2025-05-29 01:14:33.002452 | orchestrator | skipping: [testbed-node-4]
2025-05-29 01:14:33.002458 | orchestrator | skipping: [testbed-node-5] => (item=nova-conductor)
2025-05-29 01:14:33.002467 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute)
2025-05-29 01:14:33.002473 | orchestrator | skipping: [testbed-node-5] => (item=nova-compute-ironic)
2025-05-29 01:14:33.002478 | orchestrator | skipping: [testbed-node-5] => (item=nova-novncproxy)
2025-05-29 01:14:33.002483 | orchestrator | skipping: [testbed-node-5] => (item=nova-serialproxy)
2025-05-29 01:14:33.002489 | orchestrator | skipping: [testbed-node-5] => (item=nova-spicehtml5proxy)
2025-05-29 01:14:33.002494 | orchestrator | skipping: [testbed-node-5]
2025-05-29 01:14:33.002499 | orchestrator | skipping: [testbed-node-0] => (item=nova-conductor)
2025-05-29 01:14:33.002505 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute)
2025-05-29 01:14:33.002510 | orchestrator | skipping: [testbed-node-0] => (item=nova-compute-ironic)
2025-05-29 01:14:33.002515 | orchestrator | skipping: [testbed-node-0] => (item=nova-novncproxy)
2025-05-29 01:14:33.002521 | orchestrator | skipping: [testbed-node-0] => (item=nova-serialproxy)
2025-05-29 01:14:33.002526 | orchestrator | skipping: [testbed-node-0] => (item=nova-spicehtml5proxy)
2025-05-29 01:14:33.002531 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:33.002537 | orchestrator | skipping: [testbed-node-1] => (item=nova-conductor)
2025-05-29 01:14:33.002542 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute)
2025-05-29 01:14:33.002547 | orchestrator | skipping: [testbed-node-1] => (item=nova-compute-ironic)
2025-05-29 01:14:33.002553 | orchestrator | skipping: [testbed-node-1] => (item=nova-novncproxy)
2025-05-29 01:14:33.002558 | orchestrator | skipping: [testbed-node-1] => (item=nova-serialproxy)
2025-05-29 01:14:33.002564 | orchestrator | skipping: [testbed-node-1] => (item=nova-spicehtml5proxy)
2025-05-29 01:14:33.002569 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:33.002574 | orchestrator | skipping: [testbed-node-2] => (item=nova-conductor)
2025-05-29 01:14:33.002580 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute)
2025-05-29 01:14:33.002585 | orchestrator | skipping: [testbed-node-2] => (item=nova-compute-ironic)
2025-05-29 01:14:33.002590 | orchestrator | skipping: [testbed-node-2] => (item=nova-novncproxy)
2025-05-29 01:14:33.002596 | orchestrator | skipping: [testbed-node-2] => (item=nova-serialproxy)
2025-05-29 01:14:33.002601 | orchestrator | skipping: [testbed-node-2] => (item=nova-spicehtml5proxy)
2025-05-29 01:14:33.002606 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:33.002612 | orchestrator |
2025-05-29 01:14:33.002617 | orchestrator | PLAY [Reload global Nova API services] *****************************************
2025-05-29 01:14:33.002622 | orchestrator |
2025-05-29 01:14:33.002628 | orchestrator | TASK [nova : Reload nova API services to remove RPC version pin] ***************
2025-05-29 01:14:33.002633 | orchestrator | Thursday 29 May 2025 01:14:29 +0000 (0:00:01.392) 0:08:08.300 **********
2025-05-29 01:14:33.002638 | orchestrator | skipping: [testbed-node-0] => (item=nova-scheduler)
2025-05-29 01:14:33.002644 | orchestrator | skipping: [testbed-node-0] => (item=nova-api)
2025-05-29 01:14:33.002650 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:33.002655 | orchestrator | skipping: [testbed-node-1] => (item=nova-scheduler)
2025-05-29 01:14:33.002660 | orchestrator | skipping: [testbed-node-1] => (item=nova-api)
2025-05-29 01:14:33.002666 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:33.002674 | orchestrator | skipping: [testbed-node-2] => (item=nova-scheduler)
2025-05-29 01:14:33.002680 | orchestrator | skipping: [testbed-node-2] => (item=nova-api)
2025-05-29 01:14:33.002686 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:33.002691 | orchestrator |
2025-05-29 01:14:33.002697 | orchestrator | PLAY [Run Nova API online data migrations] *************************************
2025-05-29 01:14:33.002702 | orchestrator |
2025-05-29 01:14:33.002708 | orchestrator | TASK [nova : Run Nova API online database migrations] **************************
2025-05-29 01:14:33.002713 | orchestrator | Thursday 29 May 2025 01:14:29 +0000 (0:00:00.801) 0:08:09.101 **********
2025-05-29 01:14:33.002718 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:33.002728 | orchestrator |
2025-05-29 01:14:33.002733 | orchestrator | PLAY [Run Nova cell online data migrations] ************************************
2025-05-29 01:14:33.002738 | orchestrator |
2025-05-29 01:14:33.002744 | orchestrator | TASK [nova-cell : Run Nova cell online database migrations] ********************
2025-05-29 01:14:33.002749 | orchestrator | Thursday 29 May 2025 01:14:30 +0000 (0:00:00.921) 0:08:10.022 **********
2025-05-29 01:14:33.002755 | orchestrator | skipping: [testbed-node-0]
2025-05-29 01:14:33.002760 | orchestrator | skipping: [testbed-node-1]
2025-05-29 01:14:33.002765 | orchestrator | skipping: [testbed-node-2]
2025-05-29 01:14:33.002771 | orchestrator |
2025-05-29 01:14:33.002776 | orchestrator | PLAY RECAP *********************************************************************
2025-05-29 01:14:33.002782 | orchestrator | testbed-manager : ok=3  changed=3  unreachable=0 failed=0 skipped=0 rescued=0 ignored=0
2025-05-29 01:14:33.002791 | orchestrator | testbed-node-0 : ok=54  changed=35  unreachable=0 failed=0 skipped=44  rescued=0 ignored=0
2025-05-29 01:14:33.002797 | orchestrator | testbed-node-1 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-29 01:14:33.002802 | orchestrator | testbed-node-2 : ok=27  changed=19  unreachable=0 failed=0 skipped=51  rescued=0 ignored=0
2025-05-29 01:14:33.002808 | orchestrator | testbed-node-3 : ok=38  changed=27  unreachable=0 failed=0 skipped=21  rescued=0 ignored=0
2025-05-29 01:14:33.002813 | orchestrator | testbed-node-4 : ok=42  changed=27  unreachable=0 failed=0 skipped=18  rescued=0 ignored=0
2025-05-29 01:14:33.002819 | orchestrator | testbed-node-5 : ok=37  changed=27  unreachable=0 failed=0 skipped=19  rescued=0 ignored=0
2025-05-29 01:14:33.002824 | orchestrator |
2025-05-29 01:14:33.002830 | orchestrator |
2025-05-29 01:14:33.002835 | orchestrator | TASKS RECAP ********************************************************************
2025-05-29 01:14:33.002840 | orchestrator | Thursday 29 May 2025 01:14:31 +0000 (0:00:00.551) 0:08:10.574 **********
2025-05-29 01:14:33.002846 | orchestrator | ===============================================================================
2025-05-29 01:14:33.002851 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 29.29s
2025-05-29 01:14:33.002857 | orchestrator | nova-cell : Restart nova-libvirt container ----------------------------- 27.72s
2025-05-29 01:14:33.002862 | orchestrator | nova-cell : Restart nova-compute container ----------------------------- 23.18s
2025-05-29 01:14:33.002868 | orchestrator | nova-cell : Waiting for nova-compute services to register themselves --- 22.32s
2025-05-29 01:14:33.002873 | orchestrator | nova : Restart nova-scheduler container -------------------------------- 21.87s
2025-05-29 01:14:33.002878 | orchestrator | nova-cell : Restart nova-ssh container --------------------------------- 21.69s
2025-05-29 01:14:33.002883 | orchestrator | nova-cell : Running Nova cell bootstrap container ---------------------- 19.12s
2025-05-29 01:14:33.002889 | orchestrator | nova : Running Nova API bootstrap container ---------------------------- 15.88s
2025-05-29 01:14:33.002894 | orchestrator | nova-cell : Restart nova-novncproxy container -------------------------- 15.17s
2025-05-29 01:14:33.002900 | orchestrator | nova : Create cell0 mappings ------------------------------------------- 13.50s
2025-05-29 01:14:33.002905 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 12.04s
2025-05-29 01:14:33.002910 | orchestrator | nova-cell : Create cell ------------------------------------------------ 10.36s
2025-05-29 01:14:33.002916 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.11s
2025-05-29 01:14:33.002921 | orchestrator | nova-cell : Get a list of existing cells ------------------------------- 10.05s
2025-05-29 01:14:33.002926 | orchestrator | nova-cell : Copying files for nova-ssh ---------------------------------- 9.92s
2025-05-29 01:14:33.002935 | orchestrator | nova-cell : Fail if nova-compute service failed to register ------------- 9.81s
2025-05-29 01:14:33.002941 | orchestrator | nova-cell : Discover nova hosts ----------------------------------------- 8.69s
2025-05-29 01:14:33.002946 | orchestrator | service-rabbitmq : nova | Ensure RabbitMQ users exist ------------------- 8.39s
2025-05-29 01:14:33.002951 | orchestrator | service-ks-register : nova | Granting user roles ------------------------ 8.35s
2025-05-29 01:14:33.002957 | orchestrator | nova-cell : Restart nova-conductor container ---------------------------- 7.66s
2025-05-29 01:14:33.002962 | orchestrator | 2025-05-29 01:14:32 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:14:33.002968 | orchestrator | 2025-05-29 01:14:32 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:14:36.039064 | orchestrator | 2025-05-29 01:14:36 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:14:36.039176 | orchestrator | 2025-05-29 01:14:36 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:14:39.087587 | orchestrator | 2025-05-29 01:14:39 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:14:39.087684 | orchestrator | 2025-05-29 01:14:39 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:14:42.140718 | orchestrator | 2025-05-29 01:14:42 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:14:42.140827 | orchestrator | 2025-05-29 01:14:42 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:14:45.185574 | orchestrator | 2025-05-29 01:14:45 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:14:45.185659 | orchestrator | 2025-05-29 01:14:45 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:14:48.236727 | orchestrator | 2025-05-29 01:14:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:14:48.236840 | orchestrator | 2025-05-29 01:14:48 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:14:51.290852 | orchestrator | 2025-05-29 01:14:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:14:51.290953 | orchestrator | 2025-05-29 01:14:51 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:14:54.349709 | orchestrator | 2025-05-29 01:14:54 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:14:54.349844 | orchestrator | 2025-05-29 01:14:54 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:14:57.397678 | orchestrator | 2025-05-29 01:14:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:14:57.397782 | orchestrator | 2025-05-29 01:14:57 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:00.442674 | orchestrator | 2025-05-29 01:15:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:00.442757 | orchestrator | 2025-05-29 01:15:00 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:03.497432 | orchestrator | 2025-05-29 01:15:03 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:03.497519 | orchestrator | 2025-05-29 01:15:03 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:06.548859 | orchestrator | 2025-05-29 01:15:06 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:06.549006 | orchestrator | 2025-05-29 01:15:06 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:09.602430 | orchestrator | 2025-05-29 01:15:09 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:09.602572 | orchestrator | 2025-05-29 01:15:09 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:12.652206 | orchestrator | 2025-05-29 01:15:12 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:12.652344 | orchestrator | 2025-05-29 01:15:12 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:15.701379 | orchestrator | 2025-05-29 01:15:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:15.701481 | orchestrator | 2025-05-29 01:15:15 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:18.761128 | orchestrator | 2025-05-29 01:15:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:18.761241 | orchestrator | 2025-05-29 01:15:18 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:21.818278 | orchestrator | 2025-05-29 01:15:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:21.818478 | orchestrator | 2025-05-29 01:15:21 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:24.865608 | orchestrator | 2025-05-29 01:15:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:24.865727 | orchestrator | 2025-05-29 01:15:24 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:27.908819 | orchestrator | 2025-05-29 01:15:27 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:27.908922 | orchestrator | 2025-05-29 01:15:27 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:30.954858 | orchestrator | 2025-05-29 01:15:30 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:30.954965 | orchestrator | 2025-05-29 01:15:30 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:34.006743 | orchestrator | 2025-05-29 01:15:34 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:34.006857 | orchestrator | 2025-05-29 01:15:34 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:37.070843 | orchestrator | 2025-05-29 01:15:37 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:37.070953 | orchestrator | 2025-05-29 01:15:37 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:40.123490 | orchestrator | 2025-05-29 01:15:40 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:40.123598 | orchestrator | 2025-05-29 01:15:40 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:43.182387 | orchestrator | 2025-05-29 01:15:43 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:43.182499 | orchestrator | 2025-05-29 01:15:43 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:46.234195 | orchestrator | 2025-05-29 01:15:46 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:46.234282 | orchestrator | 2025-05-29 01:15:46 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:49.292486 | orchestrator | 2025-05-29 01:15:49 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:49.292597 | orchestrator | 2025-05-29 01:15:49 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:52.339043 | orchestrator | 2025-05-29 01:15:52 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:52.339144 | orchestrator | 2025-05-29 01:15:52 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:55.389231 | orchestrator | 2025-05-29 01:15:55 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:55.389340 | orchestrator | 2025-05-29 01:15:55 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:15:58.440074 | orchestrator | 2025-05-29 01:15:58 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:15:58.440192 | orchestrator | 2025-05-29 01:15:58 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:01.491593 | orchestrator | 2025-05-29 01:16:01 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:01.491701 | orchestrator | 2025-05-29 01:16:01 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:04.545444 | orchestrator | 2025-05-29 01:16:04 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:04.545549 | orchestrator | 2025-05-29 01:16:04 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:07.590843 | orchestrator | 2025-05-29 01:16:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:07.590937 | orchestrator | 2025-05-29 01:16:07 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:10.637591 | orchestrator | 2025-05-29 01:16:10 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:10.637679 | orchestrator | 2025-05-29 01:16:10 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:13.683081 | orchestrator | 2025-05-29 01:16:13 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:13.683187 | orchestrator | 2025-05-29 01:16:13 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:16.721638 | orchestrator | 2025-05-29 01:16:16 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:16.721724 | orchestrator | 2025-05-29 01:16:16 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:19.779484 | orchestrator | 2025-05-29 01:16:19 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:19.779599 | orchestrator | 2025-05-29 01:16:19 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:22.828980 | orchestrator | 2025-05-29 01:16:22 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:22.829086 | orchestrator | 2025-05-29 01:16:22 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:25.876139 | orchestrator | 2025-05-29 01:16:25 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:25.876244 | orchestrator | 2025-05-29 01:16:25 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:28.928091 | orchestrator | 2025-05-29 01:16:28 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:28.928320 | orchestrator | 2025-05-29 01:16:28 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:31.977229 | orchestrator | 2025-05-29 01:16:31 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:31.977357 | orchestrator | 2025-05-29 01:16:31 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:35.026466 | orchestrator | 2025-05-29 01:16:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:35.026586 | orchestrator | 2025-05-29 01:16:35 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:38.079652 | orchestrator | 2025-05-29 01:16:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:38.079760 | orchestrator | 2025-05-29 01:16:38 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:41.124280 | orchestrator | 2025-05-29 01:16:41 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:41.124381 | orchestrator | 2025-05-29 01:16:41 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:44.173539 | orchestrator | 2025-05-29 01:16:44 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:44.173631 | orchestrator | 2025-05-29 01:16:44 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:47.222526 | orchestrator | 2025-05-29 01:16:47 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:47.222733 | orchestrator | 2025-05-29 01:16:47 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:50.279616 | orchestrator | 2025-05-29 01:16:50 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:50.279733 | orchestrator | 2025-05-29 01:16:50 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:53.321796 | orchestrator | 2025-05-29 01:16:53 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:53.321899 | orchestrator | 2025-05-29 01:16:53 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:56.362971 | orchestrator | 2025-05-29 01:16:56 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:56.363075 | orchestrator | 2025-05-29 01:16:56 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:16:59.416136 | orchestrator | 2025-05-29 01:16:59 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:16:59.416244 | orchestrator | 2025-05-29 01:16:59 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:17:02.463334 | orchestrator | 2025-05-29 01:17:02 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:17:02.463428 | orchestrator | 2025-05-29 01:17:02 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:17:05.523601 | orchestrator | 2025-05-29 01:17:05 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:17:05.523689 | orchestrator | 2025-05-29 01:17:05 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:17:08.575213 | orchestrator | 2025-05-29 01:17:08 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:17:08.575323 | orchestrator | 2025-05-29 01:17:08 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:17:11.625530 | orchestrator | 2025-05-29 01:17:11 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:17:11.625636 | orchestrator | 2025-05-29 01:17:11 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:17:14.674629 | orchestrator | 2025-05-29 01:17:14 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:17:14.674737 | orchestrator | 2025-05-29 01:17:14 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:17:17.725703 | orchestrator | 2025-05-29 01:17:17 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:17:17.725814 | orchestrator | 2025-05-29 01:17:17 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:17:20.783966 | orchestrator | 2025-05-29 01:17:20 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:17:20.784072 | orchestrator | 2025-05-29 01:17:20 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:17:23.828957 | orchestrator
| 2025-05-29 01:17:23 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:17:23.829056 | orchestrator | 2025-05-29 01:17:23 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:17:26.878095 | orchestrator | 2025-05-29 01:17:26 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:17:26.878200 | orchestrator | 2025-05-29 01:17:26 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:17:29.927365 | orchestrator | 2025-05-29 01:17:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:17:29.927467 | orchestrator | 2025-05-29 01:17:29 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:17:32.977625 | orchestrator | 2025-05-29 01:17:32 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:17:32.977733 | orchestrator | 2025-05-29 01:17:32 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:17:36.040606 | orchestrator | 2025-05-29 01:17:36 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:17:36.040716 | orchestrator | 2025-05-29 01:17:36 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:17:39.091624 | orchestrator | 2025-05-29 01:17:39 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:17:39.091724 | orchestrator | 2025-05-29 01:17:39 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:17:42.135377 | orchestrator | 2025-05-29 01:17:42 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:17:42.135571 | orchestrator | 2025-05-29 01:17:42 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:17:45.177821 | orchestrator | 2025-05-29 01:17:45 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:17:45.177910 | orchestrator | 2025-05-29 01:17:45 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:17:48.227855 | orchestrator | 2025-05-29 
01:17:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:17:48.227962 | orchestrator | 2025-05-29 01:17:48 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:17:51.279265 | orchestrator | 2025-05-29 01:17:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:17:51.279373 | orchestrator | 2025-05-29 01:17:51 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:17:54.333069 | orchestrator | 2025-05-29 01:17:54 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:17:54.333180 | orchestrator | 2025-05-29 01:17:54 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:17:57.388976 | orchestrator | 2025-05-29 01:17:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:17:57.389081 | orchestrator | 2025-05-29 01:17:57 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:00.446675 | orchestrator | 2025-05-29 01:18:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:00.446800 | orchestrator | 2025-05-29 01:18:00 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:03.496616 | orchestrator | 2025-05-29 01:18:03 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:03.496707 | orchestrator | 2025-05-29 01:18:03 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:06.551397 | orchestrator | 2025-05-29 01:18:06 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:06.551582 | orchestrator | 2025-05-29 01:18:06 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:09.603114 | orchestrator | 2025-05-29 01:18:09 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:09.603382 | orchestrator | 2025-05-29 01:18:09 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:12.656154 | orchestrator | 2025-05-29 01:18:12 | INFO 
 | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:12.656279 | orchestrator | 2025-05-29 01:18:12 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:15.703267 | orchestrator | 2025-05-29 01:18:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:15.703377 | orchestrator | 2025-05-29 01:18:15 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:18.755251 | orchestrator | 2025-05-29 01:18:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:18.755356 | orchestrator | 2025-05-29 01:18:18 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:21.806487 | orchestrator | 2025-05-29 01:18:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:21.806655 | orchestrator | 2025-05-29 01:18:21 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:24.858637 | orchestrator | 2025-05-29 01:18:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:24.858746 | orchestrator | 2025-05-29 01:18:24 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:27.906337 | orchestrator | 2025-05-29 01:18:27 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:27.906451 | orchestrator | 2025-05-29 01:18:27 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:30.962918 | orchestrator | 2025-05-29 01:18:30 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:30.963001 | orchestrator | 2025-05-29 01:18:30 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:34.015636 | orchestrator | 2025-05-29 01:18:34 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:34.015746 | orchestrator | 2025-05-29 01:18:34 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:37.066965 | orchestrator | 2025-05-29 01:18:37 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:37.067047 | orchestrator | 2025-05-29 01:18:37 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:40.122925 | orchestrator | 2025-05-29 01:18:40 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:40.123054 | orchestrator | 2025-05-29 01:18:40 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:43.177651 | orchestrator | 2025-05-29 01:18:43 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:43.177741 | orchestrator | 2025-05-29 01:18:43 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:46.227601 | orchestrator | 2025-05-29 01:18:46 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:46.229278 | orchestrator | 2025-05-29 01:18:46 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:49.288823 | orchestrator | 2025-05-29 01:18:49 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:49.288928 | orchestrator | 2025-05-29 01:18:49 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:52.332003 | orchestrator | 2025-05-29 01:18:52 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:52.332145 | orchestrator | 2025-05-29 01:18:52 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:55.379757 | orchestrator | 2025-05-29 01:18:55 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:55.379869 | orchestrator | 2025-05-29 01:18:55 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:18:58.440194 | orchestrator | 2025-05-29 01:18:58 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:18:58.440310 | orchestrator | 2025-05-29 01:18:58 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:01.507032 | orchestrator | 2025-05-29 01:19:01 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:01.507119 | orchestrator | 2025-05-29 01:19:01 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:04.564726 | orchestrator | 2025-05-29 01:19:04 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:04.564832 | orchestrator | 2025-05-29 01:19:04 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:07.632469 | orchestrator | 2025-05-29 01:19:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:07.632617 | orchestrator | 2025-05-29 01:19:07 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:10.688734 | orchestrator | 2025-05-29 01:19:10 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:10.688838 | orchestrator | 2025-05-29 01:19:10 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:13.747458 | orchestrator | 2025-05-29 01:19:13 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:13.747565 | orchestrator | 2025-05-29 01:19:13 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:16.805337 | orchestrator | 2025-05-29 01:19:16 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:16.805442 | orchestrator | 2025-05-29 01:19:16 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:19.859769 | orchestrator | 2025-05-29 01:19:19 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:19.859870 | orchestrator | 2025-05-29 01:19:19 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:22.912722 | orchestrator | 2025-05-29 01:19:22 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:22.912816 | orchestrator | 2025-05-29 01:19:22 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:25.966457 | orchestrator | 2025-05-29 01:19:25 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:25.966560 | orchestrator | 2025-05-29 01:19:25 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:29.015684 | orchestrator | 2025-05-29 01:19:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:29.015794 | orchestrator | 2025-05-29 01:19:29 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:32.064043 | orchestrator | 2025-05-29 01:19:32 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:32.064167 | orchestrator | 2025-05-29 01:19:32 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:35.118665 | orchestrator | 2025-05-29 01:19:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:35.118774 | orchestrator | 2025-05-29 01:19:35 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:38.171175 | orchestrator | 2025-05-29 01:19:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:38.171299 | orchestrator | 2025-05-29 01:19:38 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:41.219014 | orchestrator | 2025-05-29 01:19:41 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:41.219133 | orchestrator | 2025-05-29 01:19:41 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:44.272032 | orchestrator | 2025-05-29 01:19:44 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:44.272124 | orchestrator | 2025-05-29 01:19:44 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:47.324216 | orchestrator | 2025-05-29 01:19:47 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:47.324319 | orchestrator | 2025-05-29 01:19:47 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:50.372070 | orchestrator | 2025-05-29 01:19:50 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:50.372171 | orchestrator | 2025-05-29 01:19:50 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:53.423445 | orchestrator | 2025-05-29 01:19:53 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:53.423594 | orchestrator | 2025-05-29 01:19:53 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:56.476194 | orchestrator | 2025-05-29 01:19:56 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:56.476294 | orchestrator | 2025-05-29 01:19:56 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:19:59.531581 | orchestrator | 2025-05-29 01:19:59 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:19:59.531668 | orchestrator | 2025-05-29 01:19:59 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:02.578980 | orchestrator | 2025-05-29 01:20:02 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:02.579084 | orchestrator | 2025-05-29 01:20:02 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:05.631912 | orchestrator | 2025-05-29 01:20:05 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:05.632020 | orchestrator | 2025-05-29 01:20:05 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:08.682459 | orchestrator | 2025-05-29 01:20:08 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:08.682559 | orchestrator | 2025-05-29 01:20:08 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:11.729253 | orchestrator | 2025-05-29 01:20:11 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:11.729357 | orchestrator | 2025-05-29 01:20:11 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:14.777056 | orchestrator | 2025-05-29 01:20:14 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:14.777155 | orchestrator | 2025-05-29 01:20:14 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:17.828446 | orchestrator | 2025-05-29 01:20:17 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:17.828572 | orchestrator | 2025-05-29 01:20:17 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:20.884715 | orchestrator | 2025-05-29 01:20:20 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:20.884815 | orchestrator | 2025-05-29 01:20:20 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:23.938377 | orchestrator | 2025-05-29 01:20:23 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:23.938742 | orchestrator | 2025-05-29 01:20:23 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:26.997946 | orchestrator | 2025-05-29 01:20:26 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:26.998068 | orchestrator | 2025-05-29 01:20:26 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:30.049020 | orchestrator | 2025-05-29 01:20:30 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:30.049258 | orchestrator | 2025-05-29 01:20:30 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:33.095315 | orchestrator | 2025-05-29 01:20:33 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:33.095427 | orchestrator | 2025-05-29 01:20:33 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:36.147324 | orchestrator | 2025-05-29 01:20:36 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:36.147440 | orchestrator | 2025-05-29 01:20:36 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:39.202917 | orchestrator | 2025-05-29 01:20:39 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:39.203029 | orchestrator | 2025-05-29 01:20:39 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:42.248269 | orchestrator | 2025-05-29 01:20:42 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:42.248373 | orchestrator | 2025-05-29 01:20:42 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:45.302134 | orchestrator | 2025-05-29 01:20:45 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:45.302245 | orchestrator | 2025-05-29 01:20:45 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:48.358993 | orchestrator | 2025-05-29 01:20:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:48.359077 | orchestrator | 2025-05-29 01:20:48 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:51.414337 | orchestrator | 2025-05-29 01:20:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:51.414442 | orchestrator | 2025-05-29 01:20:51 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:54.468422 | orchestrator | 2025-05-29 01:20:54 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:54.468523 | orchestrator | 2025-05-29 01:20:54 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:20:57.517995 | orchestrator | 2025-05-29 01:20:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:20:57.518157 | orchestrator | 2025-05-29 01:20:57 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:00.563591 | orchestrator | 2025-05-29 01:21:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:00.563720 | orchestrator | 2025-05-29 01:21:00 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:03.610727 | orchestrator | 2025-05-29 01:21:03 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:03.610820 | orchestrator | 2025-05-29 01:21:03 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:06.657412 | orchestrator | 2025-05-29 01:21:06 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:06.657522 | orchestrator | 2025-05-29 01:21:06 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:09.705595 | orchestrator | 2025-05-29 01:21:09 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:09.705726 | orchestrator | 2025-05-29 01:21:09 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:12.750905 | orchestrator | 2025-05-29 01:21:12 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:12.751008 | orchestrator | 2025-05-29 01:21:12 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:15.807476 | orchestrator | 2025-05-29 01:21:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:15.807653 | orchestrator | 2025-05-29 01:21:15 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:18.852779 | orchestrator | 2025-05-29 01:21:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:18.852881 | orchestrator | 2025-05-29 01:21:18 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:21.909585 | orchestrator | 2025-05-29 01:21:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:21.909739 | orchestrator | 2025-05-29 01:21:21 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:24.963890 | orchestrator | 2025-05-29 01:21:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:24.963994 | orchestrator | 2025-05-29 01:21:24 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:28.008177 | orchestrator | 2025-05-29 01:21:28 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:28.008283 | orchestrator | 2025-05-29 01:21:28 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:31.060298 | orchestrator | 2025-05-29 01:21:31 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:31.060401 | orchestrator | 2025-05-29 01:21:31 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:34.111771 | orchestrator | 2025-05-29 01:21:34 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:34.111902 | orchestrator | 2025-05-29 01:21:34 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:37.167241 | orchestrator | 2025-05-29 01:21:37 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:37.167357 | orchestrator | 2025-05-29 01:21:37 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:40.217766 | orchestrator | 2025-05-29 01:21:40 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:40.217881 | orchestrator | 2025-05-29 01:21:40 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:43.272503 | orchestrator | 2025-05-29 01:21:43 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:43.272607 | orchestrator | 2025-05-29 01:21:43 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:46.320602 | orchestrator | 2025-05-29 01:21:46 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:46.320747 | orchestrator | 2025-05-29 01:21:46 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:49.367813 | orchestrator | 2025-05-29 01:21:49 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:49.367915 | orchestrator | 2025-05-29 01:21:49 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:52.416067 | orchestrator | 2025-05-29 01:21:52 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:52.416169 | orchestrator | 2025-05-29 01:21:52 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:55.470205 | orchestrator | 2025-05-29 01:21:55 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:55.470307 | orchestrator | 2025-05-29 01:21:55 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:21:58.519169 | orchestrator | 2025-05-29 01:21:58 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:21:58.519243 | orchestrator | 2025-05-29 01:21:58 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:01.567057 | orchestrator | 2025-05-29 01:22:01 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:01.567188 | orchestrator | 2025-05-29 01:22:01 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:04.614948 | orchestrator | 2025-05-29 01:22:04 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:04.615073 | orchestrator | 2025-05-29 01:22:04 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:07.669048 | orchestrator | 2025-05-29 01:22:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:07.669152 | orchestrator | 2025-05-29 01:22:07 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:10.718802 | orchestrator | 2025-05-29 01:22:10 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:10.718915 | orchestrator | 2025-05-29 01:22:10 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:13.777166 | orchestrator | 2025-05-29 01:22:13 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:13.777253 | orchestrator | 2025-05-29 01:22:13 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:16.821240 | orchestrator | 2025-05-29 01:22:16 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:16.821337 | orchestrator | 2025-05-29 01:22:16 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:19.868402 | orchestrator | 2025-05-29 01:22:19 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:19.868487 | orchestrator | 2025-05-29 01:22:19 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:22.917413 | orchestrator | 2025-05-29 01:22:22 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:22.917522 | orchestrator | 2025-05-29 01:22:22 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:25.973558 | orchestrator | 2025-05-29 01:22:25 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:25.973659 | orchestrator | 2025-05-29 01:22:25 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:29.022312 | orchestrator | 2025-05-29 01:22:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:29.022425 | orchestrator | 2025-05-29 01:22:29 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:32.066796 | orchestrator | 2025-05-29 01:22:32 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:32.066928 | orchestrator | 2025-05-29 01:22:32 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:35.113577 | orchestrator | 2025-05-29 01:22:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:35.113684 | orchestrator | 2025-05-29 01:22:35 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:38.163512 | orchestrator | 2025-05-29 01:22:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:38.163618 | orchestrator | 2025-05-29 01:22:38 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:41.213615 | orchestrator | 2025-05-29 01:22:41 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:41.213719 | orchestrator | 2025-05-29 01:22:41 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:44.266816 | orchestrator | 2025-05-29 01:22:44 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:44.266903 | orchestrator | 2025-05-29 01:22:44 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:47.312826 | orchestrator | 2025-05-29 01:22:47 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:47.312982 | orchestrator | 2025-05-29 01:22:47 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:50.365510 | orchestrator | 2025-05-29 01:22:50 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:50.365613 | orchestrator | 2025-05-29 01:22:50 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:53.418005 | orchestrator | 2025-05-29 01:22:53 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:53.418135 | orchestrator | 2025-05-29 01:22:53 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:56.472461 | orchestrator | 2025-05-29 01:22:56 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:56.472543 | orchestrator | 2025-05-29 01:22:56 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:22:59.525594 | orchestrator | 2025-05-29 01:22:59 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:22:59.525701 | orchestrator | 2025-05-29 01:22:59 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:23:02.579412 | orchestrator | 2025-05-29 01:23:02 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:23:02.579511 | orchestrator | 2025-05-29 01:23:02 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:23:05.635687 | orchestrator | 2025-05-29 01:23:05 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:23:05.635816 | orchestrator | 2025-05-29 01:23:05 | INFO  | Wait 1 second(s) until the next check
[… repeated status polls every ~3 s elided: task 380b6076-52ab-40a0-aac4-02415436f773 remains in state STARTED from 01:23:05 through 01:31:32 …]
2025-05-29 01:24:06.658364 | orchestrator | 2025-05-29 01:24:06 | INFO  | Task 36b772ff-757e-42c1-99a3-a7c13534d56f is in state STARTED
2025-05-29 01:24:18.897503 | orchestrator | 2025-05-29 01:24:18 | INFO  | Task 36b772ff-757e-42c1-99a3-a7c13534d56f is in state SUCCESS
[… further polls for task 380b6076-52ab-40a0-aac4-02415436f773 elided, state STARTED throughout …]
2025-05-29 01:31:32.196247 | orchestrator | 2025-05-29 01:31:32 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:31:32.196360 | orchestrator | 2025-05-29 01:31:32 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:31:35.243321 | orchestrator | 2025-05-29 01:31:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:31:35.243422 | orchestrator | 2025-05-29 01:31:35 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:31:38.299417 | orchestrator | 2025-05-29 01:31:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:31:38.299554 | orchestrator | 2025-05-29 01:31:38 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:31:41.350233 | orchestrator | 2025-05-29 01:31:41 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:31:41.350340 | orchestrator | 2025-05-29 01:31:41 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:31:44.399725 | orchestrator | 2025-05-29 01:31:44 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:31:44.399879 | orchestrator | 2025-05-29 01:31:44 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:31:47.453924 | orchestrator | 2025-05-29 01:31:47 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:31:47.454062 | orchestrator | 2025-05-29 01:31:47 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:31:50.506450 | orchestrator | 2025-05-29 01:31:50 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:31:50.506557 | orchestrator | 2025-05-29 01:31:50 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:31:53.566390 | orchestrator | 2025-05-29 01:31:53 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:31:53.566492 | orchestrator | 2025-05-29 01:31:53 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:31:56.616457 | orchestrator | 2025-05-29 01:31:56 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:31:56.616560 | orchestrator | 2025-05-29 01:31:56 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:31:59.661979 | orchestrator | 2025-05-29 01:31:59 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:31:59.662144 | orchestrator | 2025-05-29 01:31:59 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:02.708506 | orchestrator | 2025-05-29 01:32:02 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:02.708607 | orchestrator | 2025-05-29 01:32:02 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:05.754793 | orchestrator | 2025-05-29 01:32:05 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:05.754935 | orchestrator | 2025-05-29 01:32:05 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:08.818800 | orchestrator | 2025-05-29 01:32:08 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:08.818907 | orchestrator | 2025-05-29 01:32:08 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:11.862016 | orchestrator | 2025-05-29 01:32:11 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:11.862183 | orchestrator | 2025-05-29 01:32:11 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:14.916636 | orchestrator | 2025-05-29 01:32:14 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:14.916768 | orchestrator | 2025-05-29 01:32:14 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:17.977599 | orchestrator | 2025-05-29 01:32:17 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:17.977687 | orchestrator | 2025-05-29 01:32:17 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:21.033761 | orchestrator | 2025-05-29 01:32:21 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:21.033873 | orchestrator | 2025-05-29 01:32:21 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:24.086116 | orchestrator | 2025-05-29 01:32:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:24.086221 | orchestrator | 2025-05-29 01:32:24 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:27.135994 | orchestrator | 2025-05-29 01:32:27 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:27.136100 | orchestrator | 2025-05-29 01:32:27 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:30.194251 | orchestrator | 2025-05-29 01:32:30 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:30.194359 | orchestrator | 2025-05-29 01:32:30 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:33.250212 | orchestrator | 2025-05-29 01:32:33 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:33.250349 | orchestrator | 2025-05-29 01:32:33 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:36.304578 | orchestrator | 2025-05-29 01:32:36 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:36.304669 | orchestrator | 2025-05-29 01:32:36 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:39.351938 | orchestrator | 2025-05-29 01:32:39 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:39.352046 | orchestrator | 2025-05-29 01:32:39 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:42.418170 | orchestrator | 2025-05-29 01:32:42 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:42.418278 | orchestrator | 2025-05-29 01:32:42 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:45.470696 | orchestrator | 2025-05-29 01:32:45 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:45.470894 | orchestrator | 2025-05-29 01:32:45 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:48.522275 | orchestrator | 2025-05-29 01:32:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:48.522378 | orchestrator | 2025-05-29 01:32:48 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:51.576713 | orchestrator | 2025-05-29 01:32:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:51.576880 | orchestrator | 2025-05-29 01:32:51 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:54.632529 | orchestrator | 2025-05-29 01:32:54 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:54.632647 | orchestrator | 2025-05-29 01:32:54 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:32:57.677970 | orchestrator | 2025-05-29 01:32:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:32:57.678105 | orchestrator | 2025-05-29 01:32:57 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:00.732325 | orchestrator | 2025-05-29 01:33:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:00.732443 | orchestrator | 2025-05-29 01:33:00 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:03.783881 | orchestrator | 2025-05-29 01:33:03 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:03.784019 | orchestrator | 2025-05-29 01:33:03 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:06.834861 | orchestrator | 2025-05-29 01:33:06 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:06.834970 | orchestrator | 2025-05-29 01:33:06 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:09.886845 | orchestrator | 2025-05-29 01:33:09 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:09.886962 | orchestrator | 2025-05-29 01:33:09 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:12.943646 | orchestrator | 2025-05-29 01:33:12 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:12.943826 | orchestrator | 2025-05-29 01:33:12 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:15.992112 | orchestrator | 2025-05-29 01:33:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:15.992218 | orchestrator | 2025-05-29 01:33:15 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:19.041733 | orchestrator | 2025-05-29 01:33:19 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:19.041895 | orchestrator | 2025-05-29 01:33:19 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:22.088986 | orchestrator | 2025-05-29 01:33:22 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:22.089082 | orchestrator | 2025-05-29 01:33:22 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:25.139416 | orchestrator | 2025-05-29 01:33:25 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:25.139523 | orchestrator | 2025-05-29 01:33:25 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:28.196828 | orchestrator | 2025-05-29 01:33:28 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:28.197002 | orchestrator | 2025-05-29 01:33:28 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:31.248022 | orchestrator | 2025-05-29 01:33:31 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:31.248128 | orchestrator | 2025-05-29 01:33:31 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:34.298670 | orchestrator | 2025-05-29 01:33:34 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:34.298816 | orchestrator | 2025-05-29 01:33:34 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:37.353363 | orchestrator | 2025-05-29 01:33:37 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:37.353466 | orchestrator | 2025-05-29 01:33:37 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:40.406347 | orchestrator | 2025-05-29 01:33:40 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:40.406473 | orchestrator | 2025-05-29 01:33:40 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:43.461424 | orchestrator | 2025-05-29 01:33:43 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:43.461561 | orchestrator | 2025-05-29 01:33:43 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:46.521126 | orchestrator | 2025-05-29 01:33:46 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:46.521229 | orchestrator | 2025-05-29 01:33:46 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:49.580056 | orchestrator | 2025-05-29 01:33:49 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:49.580186 | orchestrator | 2025-05-29 01:33:49 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:52.635984 | orchestrator | 2025-05-29 01:33:52 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:52.636088 | orchestrator | 2025-05-29 01:33:52 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:55.689454 | orchestrator | 2025-05-29 01:33:55 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:55.689557 | orchestrator | 2025-05-29 01:33:55 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:33:58.742407 | orchestrator | 2025-05-29 01:33:58 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:33:58.742507 | orchestrator | 2025-05-29 01:33:58 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:01.803086 | orchestrator | 2025-05-29 01:34:01 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:01.803193 | orchestrator | 2025-05-29 01:34:01 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:04.860823 | orchestrator | 2025-05-29 01:34:04 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:04.860931 | orchestrator | 2025-05-29 01:34:04 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:07.932292 | orchestrator | 2025-05-29 01:34:07 | INFO  | Task 4d6ad681-5eb7-4f6d-8144-a3a50b4f1f5d is in state STARTED 2025-05-29 01:34:07.934355 | orchestrator | 2025-05-29 01:34:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:07.934391 | orchestrator | 2025-05-29 01:34:07 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:10.992178 | orchestrator | 2025-05-29 01:34:10 | INFO  | Task 4d6ad681-5eb7-4f6d-8144-a3a50b4f1f5d is in state STARTED 2025-05-29 01:34:10.993350 | orchestrator | 2025-05-29 01:34:10 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:10.993385 | orchestrator | 2025-05-29 01:34:10 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:14.056293 | orchestrator | 2025-05-29 01:34:14 | INFO  | Task 4d6ad681-5eb7-4f6d-8144-a3a50b4f1f5d is in state STARTED 2025-05-29 01:34:14.057654 | orchestrator | 2025-05-29 01:34:14 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:14.058990 | orchestrator | 2025-05-29 01:34:14 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:17.114809 | orchestrator | 2025-05-29 01:34:17 | INFO  | Task 4d6ad681-5eb7-4f6d-8144-a3a50b4f1f5d is in state SUCCESS 2025-05-29 01:34:17.115603 | orchestrator | 
2025-05-29 01:34:17 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:17.115713 | orchestrator | 2025-05-29 01:34:17 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:20.171168 | orchestrator | 2025-05-29 01:34:20 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:20.171403 | orchestrator | 2025-05-29 01:34:20 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:23.216186 | orchestrator | 2025-05-29 01:34:23 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:23.216290 | orchestrator | 2025-05-29 01:34:23 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:26.265246 | orchestrator | 2025-05-29 01:34:26 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:26.265339 | orchestrator | 2025-05-29 01:34:26 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:29.316871 | orchestrator | 2025-05-29 01:34:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:29.316975 | orchestrator | 2025-05-29 01:34:29 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:32.360081 | orchestrator | 2025-05-29 01:34:32 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:32.360173 | orchestrator | 2025-05-29 01:34:32 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:35.403565 | orchestrator | 2025-05-29 01:34:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:35.403618 | orchestrator | 2025-05-29 01:34:35 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:38.456851 | orchestrator | 2025-05-29 01:34:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:38.456961 | orchestrator | 2025-05-29 01:34:38 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:41.504410 | orchestrator | 2025-05-29 
01:34:41 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:41.504510 | orchestrator | 2025-05-29 01:34:41 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:44.554411 | orchestrator | 2025-05-29 01:34:44 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:44.554519 | orchestrator | 2025-05-29 01:34:44 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:47.596639 | orchestrator | 2025-05-29 01:34:47 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:47.596759 | orchestrator | 2025-05-29 01:34:47 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:50.648531 | orchestrator | 2025-05-29 01:34:50 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:50.648642 | orchestrator | 2025-05-29 01:34:50 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:53.697191 | orchestrator | 2025-05-29 01:34:53 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:53.697276 | orchestrator | 2025-05-29 01:34:53 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:56.749966 | orchestrator | 2025-05-29 01:34:56 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:56.750148 | orchestrator | 2025-05-29 01:34:56 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:34:59.792995 | orchestrator | 2025-05-29 01:34:59 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:34:59.793098 | orchestrator | 2025-05-29 01:34:59 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:02.838451 | orchestrator | 2025-05-29 01:35:02 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:02.838560 | orchestrator | 2025-05-29 01:35:02 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:05.892874 | orchestrator | 2025-05-29 01:35:05 | INFO 
 | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:05.892983 | orchestrator | 2025-05-29 01:35:05 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:08.940534 | orchestrator | 2025-05-29 01:35:08 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:08.940641 | orchestrator | 2025-05-29 01:35:08 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:11.992639 | orchestrator | 2025-05-29 01:35:11 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:11.992732 | orchestrator | 2025-05-29 01:35:11 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:15.047027 | orchestrator | 2025-05-29 01:35:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:15.047207 | orchestrator | 2025-05-29 01:35:15 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:18.101125 | orchestrator | 2025-05-29 01:35:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:18.101218 | orchestrator | 2025-05-29 01:35:18 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:21.148456 | orchestrator | 2025-05-29 01:35:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:21.148559 | orchestrator | 2025-05-29 01:35:21 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:24.189574 | orchestrator | 2025-05-29 01:35:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:24.189675 | orchestrator | 2025-05-29 01:35:24 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:27.237551 | orchestrator | 2025-05-29 01:35:27 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:27.237658 | orchestrator | 2025-05-29 01:35:27 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:30.291153 | orchestrator | 2025-05-29 01:35:30 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:30.291242 | orchestrator | 2025-05-29 01:35:30 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:33.346448 | orchestrator | 2025-05-29 01:35:33 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:33.346537 | orchestrator | 2025-05-29 01:35:33 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:36.396115 | orchestrator | 2025-05-29 01:35:36 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:36.396221 | orchestrator | 2025-05-29 01:35:36 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:39.442854 | orchestrator | 2025-05-29 01:35:39 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:39.442958 | orchestrator | 2025-05-29 01:35:39 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:42.504399 | orchestrator | 2025-05-29 01:35:42 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:42.504506 | orchestrator | 2025-05-29 01:35:42 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:45.554246 | orchestrator | 2025-05-29 01:35:45 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:45.554359 | orchestrator | 2025-05-29 01:35:45 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:48.608603 | orchestrator | 2025-05-29 01:35:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:48.608696 | orchestrator | 2025-05-29 01:35:48 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:51.656368 | orchestrator | 2025-05-29 01:35:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:51.656492 | orchestrator | 2025-05-29 01:35:51 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:54.708514 | orchestrator | 2025-05-29 01:35:54 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:54.708753 | orchestrator | 2025-05-29 01:35:54 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:35:57.761093 | orchestrator | 2025-05-29 01:35:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:35:57.761204 | orchestrator | 2025-05-29 01:35:57 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:00.812512 | orchestrator | 2025-05-29 01:36:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:00.812611 | orchestrator | 2025-05-29 01:36:00 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:03.860602 | orchestrator | 2025-05-29 01:36:03 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:03.860707 | orchestrator | 2025-05-29 01:36:03 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:06.912903 | orchestrator | 2025-05-29 01:36:06 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:06.913011 | orchestrator | 2025-05-29 01:36:06 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:09.962328 | orchestrator | 2025-05-29 01:36:09 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:09.962432 | orchestrator | 2025-05-29 01:36:09 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:13.007453 | orchestrator | 2025-05-29 01:36:13 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:13.007554 | orchestrator | 2025-05-29 01:36:13 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:16.057448 | orchestrator | 2025-05-29 01:36:16 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:16.057555 | orchestrator | 2025-05-29 01:36:16 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:19.110477 | orchestrator | 2025-05-29 01:36:19 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:19.110580 | orchestrator | 2025-05-29 01:36:19 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:22.164810 | orchestrator | 2025-05-29 01:36:22 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:22.164918 | orchestrator | 2025-05-29 01:36:22 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:25.212904 | orchestrator | 2025-05-29 01:36:25 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:25.213023 | orchestrator | 2025-05-29 01:36:25 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:28.264058 | orchestrator | 2025-05-29 01:36:28 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:28.264171 | orchestrator | 2025-05-29 01:36:28 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:31.313143 | orchestrator | 2025-05-29 01:36:31 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:31.313247 | orchestrator | 2025-05-29 01:36:31 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:34.363294 | orchestrator | 2025-05-29 01:36:34 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:34.363393 | orchestrator | 2025-05-29 01:36:34 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:37.425203 | orchestrator | 2025-05-29 01:36:37 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:37.425306 | orchestrator | 2025-05-29 01:36:37 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:40.474082 | orchestrator | 2025-05-29 01:36:40 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:36:40.474180 | orchestrator | 2025-05-29 01:36:40 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:36:43.520020 | orchestrator | 2025-05-29 01:36:43 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:36:43.520124 | orchestrator | 2025-05-29 01:36:43 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:44:02.947365 | orchestrator | 2025-05-29 01:44:02 | INFO  | Task
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:44:02.947466 | orchestrator | 2025-05-29 01:44:02 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:44:06.002094 | orchestrator | 2025-05-29 01:44:05 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:44:06.003843 | orchestrator | 2025-05-29 01:44:06 | INFO  | Task 0e4f8155-66f3-4785-aca7-78a577898e1f is in state STARTED
2025-05-29 01:44:06.003918 | orchestrator | 2025-05-29 01:44:06 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:44:18.215170 | orchestrator | 2025-05-29 01:44:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:44:18.215646 | orchestrator | 2025-05-29 01:44:18 | INFO  | Task 0e4f8155-66f3-4785-aca7-78a577898e1f is in state SUCCESS
2025-05-29 01:44:18.215681 | orchestrator | 2025-05-29 01:44:18 | INFO  | Wait 1 second(s) until the next check
2025-05-29 01:44:42.605754 | orchestrator | 2025-05-29 01:44:42 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:44:42.605859
| orchestrator | 2025-05-29 01:44:42 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:44:45.651315 | orchestrator | 2025-05-29 01:44:45 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:44:45.651382 | orchestrator | 2025-05-29 01:44:45 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:44:48.702818 | orchestrator | 2025-05-29 01:44:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:44:48.702955 | orchestrator | 2025-05-29 01:44:48 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:44:51.759325 | orchestrator | 2025-05-29 01:44:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:44:51.759420 | orchestrator | 2025-05-29 01:44:51 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:44:54.812120 | orchestrator | 2025-05-29 01:44:54 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:44:54.812219 | orchestrator | 2025-05-29 01:44:54 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:44:57.868166 | orchestrator | 2025-05-29 01:44:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:44:57.868268 | orchestrator | 2025-05-29 01:44:57 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:00.918993 | orchestrator | 2025-05-29 01:45:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:00.919101 | orchestrator | 2025-05-29 01:45:00 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:03.976725 | orchestrator | 2025-05-29 01:45:03 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:03.976824 | orchestrator | 2025-05-29 01:45:03 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:07.024965 | orchestrator | 2025-05-29 01:45:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:07.025080 | orchestrator 
| 2025-05-29 01:45:07 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:10.078148 | orchestrator | 2025-05-29 01:45:10 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:10.078255 | orchestrator | 2025-05-29 01:45:10 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:13.123632 | orchestrator | 2025-05-29 01:45:13 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:13.123816 | orchestrator | 2025-05-29 01:45:13 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:16.170648 | orchestrator | 2025-05-29 01:45:16 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:16.170728 | orchestrator | 2025-05-29 01:45:16 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:19.219598 | orchestrator | 2025-05-29 01:45:19 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:19.219717 | orchestrator | 2025-05-29 01:45:19 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:22.268830 | orchestrator | 2025-05-29 01:45:22 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:22.268962 | orchestrator | 2025-05-29 01:45:22 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:25.316676 | orchestrator | 2025-05-29 01:45:25 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:25.316800 | orchestrator | 2025-05-29 01:45:25 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:28.367235 | orchestrator | 2025-05-29 01:45:28 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:28.367340 | orchestrator | 2025-05-29 01:45:28 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:31.418129 | orchestrator | 2025-05-29 01:45:31 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:31.418243 | orchestrator | 2025-05-29 
01:45:31 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:34.464390 | orchestrator | 2025-05-29 01:45:34 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:34.464474 | orchestrator | 2025-05-29 01:45:34 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:37.516214 | orchestrator | 2025-05-29 01:45:37 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:37.516319 | orchestrator | 2025-05-29 01:45:37 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:40.573084 | orchestrator | 2025-05-29 01:45:40 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:40.573178 | orchestrator | 2025-05-29 01:45:40 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:43.624992 | orchestrator | 2025-05-29 01:45:43 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:43.625100 | orchestrator | 2025-05-29 01:45:43 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:46.677753 | orchestrator | 2025-05-29 01:45:46 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:46.677853 | orchestrator | 2025-05-29 01:45:46 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:49.728520 | orchestrator | 2025-05-29 01:45:49 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:49.728627 | orchestrator | 2025-05-29 01:45:49 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:52.775174 | orchestrator | 2025-05-29 01:45:52 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:52.775301 | orchestrator | 2025-05-29 01:45:52 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:45:55.821730 | orchestrator | 2025-05-29 01:45:55 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:55.821840 | orchestrator | 2025-05-29 01:45:55 | INFO 
 | Wait 1 second(s) until the next check 2025-05-29 01:45:58.875134 | orchestrator | 2025-05-29 01:45:58 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:45:58.875259 | orchestrator | 2025-05-29 01:45:58 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:01.922279 | orchestrator | 2025-05-29 01:46:01 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:01.922397 | orchestrator | 2025-05-29 01:46:01 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:04.968833 | orchestrator | 2025-05-29 01:46:04 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:04.968980 | orchestrator | 2025-05-29 01:46:04 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:08.021504 | orchestrator | 2025-05-29 01:46:08 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:08.021607 | orchestrator | 2025-05-29 01:46:08 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:11.067576 | orchestrator | 2025-05-29 01:46:11 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:11.067683 | orchestrator | 2025-05-29 01:46:11 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:14.113212 | orchestrator | 2025-05-29 01:46:14 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:14.113307 | orchestrator | 2025-05-29 01:46:14 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:17.166643 | orchestrator | 2025-05-29 01:46:17 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:17.166746 | orchestrator | 2025-05-29 01:46:17 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:20.217130 | orchestrator | 2025-05-29 01:46:20 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:20.217265 | orchestrator | 2025-05-29 01:46:20 | INFO  | Wait 1 
second(s) until the next check 2025-05-29 01:46:23.265499 | orchestrator | 2025-05-29 01:46:23 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:23.265650 | orchestrator | 2025-05-29 01:46:23 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:26.315636 | orchestrator | 2025-05-29 01:46:26 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:26.315769 | orchestrator | 2025-05-29 01:46:26 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:29.369433 | orchestrator | 2025-05-29 01:46:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:29.369544 | orchestrator | 2025-05-29 01:46:29 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:32.413905 | orchestrator | 2025-05-29 01:46:32 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:32.414201 | orchestrator | 2025-05-29 01:46:32 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:35.465029 | orchestrator | 2025-05-29 01:46:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:35.465118 | orchestrator | 2025-05-29 01:46:35 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:38.516207 | orchestrator | 2025-05-29 01:46:38 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:38.516299 | orchestrator | 2025-05-29 01:46:38 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:41.564858 | orchestrator | 2025-05-29 01:46:41 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:41.564946 | orchestrator | 2025-05-29 01:46:41 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:44.607387 | orchestrator | 2025-05-29 01:46:44 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:44.607523 | orchestrator | 2025-05-29 01:46:44 | INFO  | Wait 1 second(s) until 
the next check 2025-05-29 01:46:47.658318 | orchestrator | 2025-05-29 01:46:47 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:47.658425 | orchestrator | 2025-05-29 01:46:47 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:50.712109 | orchestrator | 2025-05-29 01:46:50 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:50.712217 | orchestrator | 2025-05-29 01:46:50 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:53.767356 | orchestrator | 2025-05-29 01:46:53 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:53.767448 | orchestrator | 2025-05-29 01:46:53 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:56.822320 | orchestrator | 2025-05-29 01:46:56 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:56.822411 | orchestrator | 2025-05-29 01:46:56 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:46:59.871301 | orchestrator | 2025-05-29 01:46:59 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:46:59.871413 | orchestrator | 2025-05-29 01:46:59 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:02.925149 | orchestrator | 2025-05-29 01:47:02 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:02.925271 | orchestrator | 2025-05-29 01:47:02 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:05.976669 | orchestrator | 2025-05-29 01:47:05 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:05.976788 | orchestrator | 2025-05-29 01:47:05 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:09.033540 | orchestrator | 2025-05-29 01:47:09 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:09.033612 | orchestrator | 2025-05-29 01:47:09 | INFO  | Wait 1 second(s) until the next check 
2025-05-29 01:47:12.086833 | orchestrator | 2025-05-29 01:47:12 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:12.086945 | orchestrator | 2025-05-29 01:47:12 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:15.130059 | orchestrator | 2025-05-29 01:47:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:15.130141 | orchestrator | 2025-05-29 01:47:15 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:18.184560 | orchestrator | 2025-05-29 01:47:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:18.184661 | orchestrator | 2025-05-29 01:47:18 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:21.233121 | orchestrator | 2025-05-29 01:47:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:21.233232 | orchestrator | 2025-05-29 01:47:21 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:24.289432 | orchestrator | 2025-05-29 01:47:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:24.289534 | orchestrator | 2025-05-29 01:47:24 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:27.333728 | orchestrator | 2025-05-29 01:47:27 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:27.333838 | orchestrator | 2025-05-29 01:47:27 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:30.387213 | orchestrator | 2025-05-29 01:47:30 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:30.387351 | orchestrator | 2025-05-29 01:47:30 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:33.432598 | orchestrator | 2025-05-29 01:47:33 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:33.432689 | orchestrator | 2025-05-29 01:47:33 | INFO  | Wait 1 second(s) until the next check 2025-05-29 
01:47:36.479563 | orchestrator | 2025-05-29 01:47:36 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:36.479672 | orchestrator | 2025-05-29 01:47:36 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:39.525099 | orchestrator | 2025-05-29 01:47:39 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:39.525201 | orchestrator | 2025-05-29 01:47:39 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:42.576293 | orchestrator | 2025-05-29 01:47:42 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:42.576385 | orchestrator | 2025-05-29 01:47:42 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:45.626112 | orchestrator | 2025-05-29 01:47:45 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:45.626223 | orchestrator | 2025-05-29 01:47:45 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:48.677425 | orchestrator | 2025-05-29 01:47:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:48.677525 | orchestrator | 2025-05-29 01:47:48 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:51.730850 | orchestrator | 2025-05-29 01:47:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:51.730946 | orchestrator | 2025-05-29 01:47:51 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:54.781065 | orchestrator | 2025-05-29 01:47:54 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:54.781390 | orchestrator | 2025-05-29 01:47:54 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:47:57.834256 | orchestrator | 2025-05-29 01:47:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:47:57.834345 | orchestrator | 2025-05-29 01:47:57 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:00.889494 
| orchestrator | 2025-05-29 01:48:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:00.889632 | orchestrator | 2025-05-29 01:48:00 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:03.931843 | orchestrator | 2025-05-29 01:48:03 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:03.931949 | orchestrator | 2025-05-29 01:48:03 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:06.979748 | orchestrator | 2025-05-29 01:48:06 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:06.979855 | orchestrator | 2025-05-29 01:48:06 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:10.023660 | orchestrator | 2025-05-29 01:48:10 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:10.023817 | orchestrator | 2025-05-29 01:48:10 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:13.074683 | orchestrator | 2025-05-29 01:48:13 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:13.074783 | orchestrator | 2025-05-29 01:48:13 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:16.120409 | orchestrator | 2025-05-29 01:48:16 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:16.120523 | orchestrator | 2025-05-29 01:48:16 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:19.176230 | orchestrator | 2025-05-29 01:48:19 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:19.176411 | orchestrator | 2025-05-29 01:48:19 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:22.219096 | orchestrator | 2025-05-29 01:48:22 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:22.219200 | orchestrator | 2025-05-29 01:48:22 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:25.270316 | orchestrator 
| 2025-05-29 01:48:25 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:25.270425 | orchestrator | 2025-05-29 01:48:25 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:28.321959 | orchestrator | 2025-05-29 01:48:28 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:28.322189 | orchestrator | 2025-05-29 01:48:28 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:31.374178 | orchestrator | 2025-05-29 01:48:31 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:31.374412 | orchestrator | 2025-05-29 01:48:31 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:34.420534 | orchestrator | 2025-05-29 01:48:34 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:34.420602 | orchestrator | 2025-05-29 01:48:34 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:37.470466 | orchestrator | 2025-05-29 01:48:37 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:37.470572 | orchestrator | 2025-05-29 01:48:37 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:40.520897 | orchestrator | 2025-05-29 01:48:40 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:40.521007 | orchestrator | 2025-05-29 01:48:40 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:43.570477 | orchestrator | 2025-05-29 01:48:43 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:43.570578 | orchestrator | 2025-05-29 01:48:43 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:46.625013 | orchestrator | 2025-05-29 01:48:46 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:46.625187 | orchestrator | 2025-05-29 01:48:46 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:49.674678 | orchestrator | 2025-05-29 
01:48:49 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:49.674771 | orchestrator | 2025-05-29 01:48:49 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:52.722771 | orchestrator | 2025-05-29 01:48:52 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:52.722871 | orchestrator | 2025-05-29 01:48:52 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:55.768338 | orchestrator | 2025-05-29 01:48:55 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:55.768451 | orchestrator | 2025-05-29 01:48:55 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:48:58.822333 | orchestrator | 2025-05-29 01:48:58 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:48:58.822435 | orchestrator | 2025-05-29 01:48:58 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:01.877834 | orchestrator | 2025-05-29 01:49:01 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:01.877940 | orchestrator | 2025-05-29 01:49:01 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:04.924286 | orchestrator | 2025-05-29 01:49:04 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:04.924375 | orchestrator | 2025-05-29 01:49:04 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:07.980711 | orchestrator | 2025-05-29 01:49:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:07.980819 | orchestrator | 2025-05-29 01:49:07 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:11.040829 | orchestrator | 2025-05-29 01:49:11 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:11.040918 | orchestrator | 2025-05-29 01:49:11 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:14.084767 | orchestrator | 2025-05-29 01:49:14 | INFO 
 | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:14.085525 | orchestrator | 2025-05-29 01:49:14 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:17.140602 | orchestrator | 2025-05-29 01:49:17 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:17.140723 | orchestrator | 2025-05-29 01:49:17 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:20.194527 | orchestrator | 2025-05-29 01:49:20 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:20.194631 | orchestrator | 2025-05-29 01:49:20 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:23.243860 | orchestrator | 2025-05-29 01:49:23 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:23.243961 | orchestrator | 2025-05-29 01:49:23 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:26.311363 | orchestrator | 2025-05-29 01:49:26 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:26.311421 | orchestrator | 2025-05-29 01:49:26 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:29.377718 | orchestrator | 2025-05-29 01:49:29 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:29.377806 | orchestrator | 2025-05-29 01:49:29 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:32.434280 | orchestrator | 2025-05-29 01:49:32 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:32.434387 | orchestrator | 2025-05-29 01:49:32 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:35.488675 | orchestrator | 2025-05-29 01:49:35 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:35.488781 | orchestrator | 2025-05-29 01:49:35 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:38.547547 | orchestrator | 2025-05-29 01:49:38 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:38.547655 | orchestrator | 2025-05-29 01:49:38 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:41.610663 | orchestrator | 2025-05-29 01:49:41 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:41.610765 | orchestrator | 2025-05-29 01:49:41 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:44.660160 | orchestrator | 2025-05-29 01:49:44 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:44.660263 | orchestrator | 2025-05-29 01:49:44 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:47.716101 | orchestrator | 2025-05-29 01:49:47 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:47.716192 | orchestrator | 2025-05-29 01:49:47 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:50.771270 | orchestrator | 2025-05-29 01:49:50 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:50.771356 | orchestrator | 2025-05-29 01:49:50 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:53.840542 | orchestrator | 2025-05-29 01:49:53 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:53.840643 | orchestrator | 2025-05-29 01:49:53 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:56.899132 | orchestrator | 2025-05-29 01:49:56 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:56.899242 | orchestrator | 2025-05-29 01:49:56 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:49:59.951764 | orchestrator | 2025-05-29 01:49:59 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:49:59.951865 | orchestrator | 2025-05-29 01:49:59 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:50:02.997810 | orchestrator | 2025-05-29 01:50:02 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:50:02.997934 | orchestrator | 2025-05-29 01:50:02 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:50:06.043771 | orchestrator | 2025-05-29 01:50:06 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:50:06.043877 | orchestrator | 2025-05-29 01:50:06 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:50:09.096731 | orchestrator | 2025-05-29 01:50:09 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:50:09.096816 | orchestrator | 2025-05-29 01:50:09 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:50:12.147263 | orchestrator | 2025-05-29 01:50:12 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:50:12.147363 | orchestrator | 2025-05-29 01:50:12 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:50:15.197314 | orchestrator | 2025-05-29 01:50:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:50:15.197422 | orchestrator | 2025-05-29 01:50:15 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:50:18.250502 | orchestrator | 2025-05-29 01:50:18 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:50:18.250627 | orchestrator | 2025-05-29 01:50:18 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:50:21.295223 | orchestrator | 2025-05-29 01:50:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:50:21.295328 | orchestrator | 2025-05-29 01:50:21 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:50:24.341889 | orchestrator | 2025-05-29 01:50:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:50:24.341982 | orchestrator | 2025-05-29 01:50:24 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:50:27.402151 | orchestrator | 2025-05-29 01:50:27 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:50:27.402261 | orchestrator | 2025-05-29 01:50:27 | INFO  | Wait 1 second(s) until the next check
[Repetitive polling output elided: task 380b6076-52ab-40a0-aac4-02415436f773 was re-checked roughly every 3 seconds and remained in state STARTED from 01:50:30 to 01:54:04.]
2025-05-29 01:54:07.010382 | orchestrator | 2025-05-29 01:54:07 | INFO  | Task 83d29547-ab24-4597-ae8f-e00fbbaad361 is in state STARTED
2025-05-29 01:54:07.012546 | orchestrator | 2025-05-29 01:54:07 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:54:07.012702 | orchestrator | 2025-05-29 01:54:07 | INFO  | Wait 1 second(s) until the next check
[Repetitive polling output elided: both tasks re-checked at 01:54:10 and 01:54:13, still in state STARTED.]
2025-05-29 01:54:16.183680 | orchestrator | 2025-05-29 01:54:16 | INFO  | Task 83d29547-ab24-4597-ae8f-e00fbbaad361 is in state SUCCESS
2025-05-29 01:54:16.185070 | orchestrator | 2025-05-29 01:54:16 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED
2025-05-29 01:54:16.185156 | orchestrator | 2025-05-29 01:54:16 | INFO  | Wait 1 second(s) until the next check
[Repetitive polling output elided: task 380b6076-52ab-40a0-aac4-02415436f773 was re-checked roughly every 3 seconds and remained in state STARTED from 01:54:19 to 01:58:50.]
2025-05-29 01:58:53.647903 | orchestrator | 2025-05-29 01:58:53 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:58:53.647991 | orchestrator | 2025-05-29 01:58:53 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:58:56.697186 | orchestrator | 2025-05-29 01:58:56 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:58:56.697301 | orchestrator | 2025-05-29 01:58:56 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:58:59.746504 | orchestrator | 2025-05-29 01:58:59 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:58:59.746603 | orchestrator | 2025-05-29 01:58:59 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:02.801994 | orchestrator | 2025-05-29 01:59:02 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:02.802201 | orchestrator | 2025-05-29 01:59:02 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:05.843802 | orchestrator | 2025-05-29 01:59:05 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:05.843907 | orchestrator | 2025-05-29 01:59:05 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:08.898440 | orchestrator | 2025-05-29 01:59:08 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:08.898535 | orchestrator | 2025-05-29 01:59:08 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:11.941592 | orchestrator | 2025-05-29 01:59:11 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:11.941671 | orchestrator | 2025-05-29 01:59:11 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:14.991978 | orchestrator | 2025-05-29 01:59:14 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:14.992112 | orchestrator | 2025-05-29 01:59:14 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:18.044386 | orchestrator | 2025-05-29 01:59:18 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:18.044493 | orchestrator | 2025-05-29 01:59:18 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:21.087164 | orchestrator | 2025-05-29 01:59:21 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:21.087331 | orchestrator | 2025-05-29 01:59:21 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:24.138724 | orchestrator | 2025-05-29 01:59:24 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:24.138825 | orchestrator | 2025-05-29 01:59:24 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:27.186465 | orchestrator | 2025-05-29 01:59:27 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:27.186571 | orchestrator | 2025-05-29 01:59:27 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:30.237083 | orchestrator | 2025-05-29 01:59:30 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:30.237171 | orchestrator | 2025-05-29 01:59:30 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:33.295273 | orchestrator | 2025-05-29 01:59:33 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:33.295412 | orchestrator | 2025-05-29 01:59:33 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:36.340351 | orchestrator | 2025-05-29 01:59:36 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:36.340460 | orchestrator | 2025-05-29 01:59:36 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:39.386702 | orchestrator | 2025-05-29 01:59:39 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:39.386812 | orchestrator | 2025-05-29 01:59:39 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:42.441212 | orchestrator | 2025-05-29 01:59:42 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:42.441317 | orchestrator | 2025-05-29 01:59:42 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:45.499217 | orchestrator | 2025-05-29 01:59:45 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:45.499350 | orchestrator | 2025-05-29 01:59:45 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:48.549148 | orchestrator | 2025-05-29 01:59:48 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:48.549236 | orchestrator | 2025-05-29 01:59:48 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:51.595678 | orchestrator | 2025-05-29 01:59:51 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:51.595790 | orchestrator | 2025-05-29 01:59:51 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:54.646787 | orchestrator | 2025-05-29 01:59:54 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:54.646917 | orchestrator | 2025-05-29 01:59:54 | INFO  | Wait 1 second(s) until the next check 2025-05-29 01:59:57.693531 | orchestrator | 2025-05-29 01:59:57 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 01:59:57.693640 | orchestrator | 2025-05-29 01:59:57 | INFO  | Wait 1 second(s) until the next check 2025-05-29 02:00:00.745024 | orchestrator | 2025-05-29 02:00:00 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 02:00:00.745157 | orchestrator | 2025-05-29 02:00:00 | INFO  | Wait 1 second(s) until the next check 2025-05-29 02:00:03.793237 | orchestrator | 2025-05-29 02:00:03 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 02:00:03.793342 | orchestrator | 2025-05-29 02:00:03 | INFO  | Wait 1 second(s) until the next check 2025-05-29 02:00:06.842348 | orchestrator | 2025-05-29 02:00:06 | INFO  | Task 
380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 02:00:06.842459 | orchestrator | 2025-05-29 02:00:06 | INFO  | Wait 1 second(s) until the next check 2025-05-29 02:00:09.892537 | orchestrator | 2025-05-29 02:00:09 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 02:00:09.892626 | orchestrator | 2025-05-29 02:00:09 | INFO  | Wait 1 second(s) until the next check 2025-05-29 02:00:12.942547 | orchestrator | 2025-05-29 02:00:12 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 02:00:12.942636 | orchestrator | 2025-05-29 02:00:12 | INFO  | Wait 1 second(s) until the next check 2025-05-29 02:00:15.990373 | orchestrator | 2025-05-29 02:00:15 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 02:00:15.990478 | orchestrator | 2025-05-29 02:00:15 | INFO  | Wait 1 second(s) until the next check 2025-05-29 02:00:19.035541 | orchestrator | 2025-05-29 02:00:19 | INFO  | Task 380b6076-52ab-40a0-aac4-02415436f773 is in state STARTED 2025-05-29 02:00:19.035675 | orchestrator | 2025-05-29 02:00:19 | INFO  | Wait 1 second(s) until the next check 2025-05-29 02:00:19.437613 | RUN END RESULT_TIMED_OUT: [untrusted : github.com/osism/testbed/playbooks/deploy.yml@main] 2025-05-29 02:00:19.440369 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/post.yml@main] 2025-05-29 02:00:20.240563 | 2025-05-29 02:00:20.240773 | PLAY [Post output play] 2025-05-29 02:00:20.257829 | 2025-05-29 02:00:20.257994 | LOOP [stage-output : Register sources] 2025-05-29 02:00:20.330926 | 2025-05-29 02:00:20.331380 | TASK [stage-output : Check sudo] 2025-05-29 02:00:21.225923 | orchestrator | sudo: a password is required 2025-05-29 02:00:21.382944 | orchestrator | ok: Runtime: 0:00:00.010008 2025-05-29 02:00:21.404852 | 2025-05-29 02:00:21.405118 | LOOP [stage-output : Set source and destination for files and folders] 2025-05-29 02:00:21.449818 | 2025-05-29 02:00:21.450139 | TASK 
[stage-output : Build a list of source, dest dictionaries]
2025-05-29 02:00:21.528888 | orchestrator | ok
2025-05-29 02:00:21.538380 |
2025-05-29 02:00:21.538547 | LOOP [stage-output : Ensure target folders exist]
2025-05-29 02:00:22.000548 | orchestrator | ok: "docs"
2025-05-29 02:00:22.000912 |
2025-05-29 02:00:22.244959 | orchestrator | ok: "artifacts"
2025-05-29 02:00:22.492688 | orchestrator | ok: "logs"
2025-05-29 02:00:22.515512 |
2025-05-29 02:00:22.515707 | LOOP [stage-output : Copy files and folders to staging folder]
2025-05-29 02:00:22.569580 |
2025-05-29 02:00:22.569952 | TASK [stage-output : Make all log files readable]
2025-05-29 02:00:22.857155 | orchestrator | ok
2025-05-29 02:00:22.867159 |
2025-05-29 02:00:22.867325 | TASK [stage-output : Rename log files that match extensions_to_txt]
2025-05-29 02:00:22.912590 | orchestrator | skipping: Conditional result was False
2025-05-29 02:00:22.931362 |
2025-05-29 02:00:22.931576 | TASK [stage-output : Discover log files for compression]
2025-05-29 02:00:22.956407 | orchestrator | skipping: Conditional result was False
2025-05-29 02:00:22.967993 |
2025-05-29 02:00:22.968141 | LOOP [stage-output : Archive everything from logs]
2025-05-29 02:00:23.020368 |
2025-05-29 02:00:23.020637 | PLAY [Post cleanup play]
2025-05-29 02:00:23.034775 |
2025-05-29 02:00:23.034987 | TASK [Set cloud fact (Zuul deployment)]
2025-05-29 02:00:23.102770 | orchestrator | ok
2025-05-29 02:00:23.115149 |
2025-05-29 02:00:23.115302 | TASK [Set cloud fact (local deployment)]
2025-05-29 02:00:23.151628 | orchestrator | skipping: Conditional result was False
2025-05-29 02:00:23.167330 |
2025-05-29 02:00:23.167502 | TASK [Clean the cloud environment]
2025-05-29 02:00:23.783061 | orchestrator | 2025-05-29 02:00:23 - clean up servers
2025-05-29 02:00:24.633810 | orchestrator | 2025-05-29 02:00:24 - testbed-manager
2025-05-29 02:00:24.714912 | orchestrator | 2025-05-29 02:00:24 - testbed-node-2
2025-05-29 02:00:24.799096 | orchestrator | 2025-05-29 02:00:24 - testbed-node-1
2025-05-29 02:00:24.899538 | orchestrator | 2025-05-29 02:00:24 - testbed-node-5
2025-05-29 02:00:24.993078 | orchestrator | 2025-05-29 02:00:24 - testbed-node-4
2025-05-29 02:00:25.089929 | orchestrator | 2025-05-29 02:00:25 - testbed-node-0
2025-05-29 02:00:25.187283 | orchestrator | 2025-05-29 02:00:25 - testbed-node-3
2025-05-29 02:00:25.284307 | orchestrator | 2025-05-29 02:00:25 - clean up keypairs
2025-05-29 02:00:25.304691 | orchestrator | 2025-05-29 02:00:25 - testbed
2025-05-29 02:00:25.331980 | orchestrator | 2025-05-29 02:00:25 - wait for servers to be gone
2025-05-29 02:00:34.203466 | orchestrator | 2025-05-29 02:00:34 - clean up ports
2025-05-29 02:00:34.414630 | orchestrator | 2025-05-29 02:00:34 - 0af7086d-b5bd-4bcb-acf3-9e73baa8a109
2025-05-29 02:00:34.838347 | orchestrator | 2025-05-29 02:00:34 - 298c7446-92a4-42d9-bdba-128f8b4f82d3
2025-05-29 02:00:35.098674 | orchestrator | 2025-05-29 02:00:35 - 308345be-e39a-4a67-8073-9b8587dc4ae5
2025-05-29 02:00:35.305737 | orchestrator | 2025-05-29 02:00:35 - a34fd6b8-ee44-418a-a909-1c8eb981eead
2025-05-29 02:00:35.570576 | orchestrator | 2025-05-29 02:00:35 - a35c2a62-72b2-4e72-b4d8-59b7d7f8b592
2025-05-29 02:00:35.881556 | orchestrator | 2025-05-29 02:00:35 - bd238691-e693-4c6c-ab0a-d027ee2a2414
2025-05-29 02:00:36.097132 | orchestrator | 2025-05-29 02:00:36 - bf61b2bb-739e-4d4f-bf3a-6927f628ed28
2025-05-29 02:00:36.304082 | orchestrator | 2025-05-29 02:00:36 - clean up volumes
2025-05-29 02:00:36.436283 | orchestrator | 2025-05-29 02:00:36 - testbed-volume-1-node-base
2025-05-29 02:00:36.479574 | orchestrator | 2025-05-29 02:00:36 - testbed-volume-4-node-base
2025-05-29 02:00:36.524301 | orchestrator | 2025-05-29 02:00:36 - testbed-volume-manager-base
2025-05-29 02:00:36.567218 | orchestrator | 2025-05-29 02:00:36 - testbed-volume-2-node-base
2025-05-29 02:00:36.613060 | orchestrator | 2025-05-29 02:00:36 - testbed-volume-3-node-base
2025-05-29 02:00:36.654864 | orchestrator | 2025-05-29 02:00:36 - testbed-volume-5-node-base
2025-05-29 02:00:36.696590 | orchestrator | 2025-05-29 02:00:36 - testbed-volume-0-node-base
2025-05-29 02:00:36.739141 | orchestrator | 2025-05-29 02:00:36 - testbed-volume-4-node-4
2025-05-29 02:00:36.778238 | orchestrator | 2025-05-29 02:00:36 - testbed-volume-7-node-4
2025-05-29 02:00:36.831718 | orchestrator | 2025-05-29 02:00:36 - testbed-volume-8-node-5
2025-05-29 02:00:36.871456 | orchestrator | 2025-05-29 02:00:36 - testbed-volume-5-node-5
2025-05-29 02:00:36.911670 | orchestrator | 2025-05-29 02:00:36 - testbed-volume-1-node-4
2025-05-29 02:00:36.956856 | orchestrator | 2025-05-29 02:00:36 - testbed-volume-3-node-3
2025-05-29 02:00:37.001925 | orchestrator | 2025-05-29 02:00:37 - testbed-volume-6-node-3
2025-05-29 02:00:37.044642 | orchestrator | 2025-05-29 02:00:37 - testbed-volume-2-node-5
2025-05-29 02:00:37.088848 | orchestrator | 2025-05-29 02:00:37 - testbed-volume-0-node-3
2025-05-29 02:00:37.127893 | orchestrator | 2025-05-29 02:00:37 - disconnect routers
2025-05-29 02:00:37.266903 | orchestrator | 2025-05-29 02:00:37 - testbed
2025-05-29 02:00:38.323540 | orchestrator | 2025-05-29 02:00:38 - clean up subnets
2025-05-29 02:00:38.377737 | orchestrator | 2025-05-29 02:00:38 - subnet-testbed-management
2025-05-29 02:00:38.560873 | orchestrator | 2025-05-29 02:00:38 - clean up networks
2025-05-29 02:00:38.742123 | orchestrator | 2025-05-29 02:00:38 - net-testbed-management
2025-05-29 02:00:39.092941 | orchestrator | 2025-05-29 02:00:39 - clean up security groups
2025-05-29 02:00:39.133552 | orchestrator | 2025-05-29 02:00:39 - testbed-node
2025-05-29 02:00:39.286157 | orchestrator | 2025-05-29 02:00:39 - testbed-management
2025-05-29 02:00:39.416463 | orchestrator | 2025-05-29 02:00:39 - clean up floating ips
2025-05-29 02:00:39.454625 | orchestrator | 2025-05-29 02:00:39 - 81.163.193.2
2025-05-29 02:00:39.828363 | orchestrator | 2025-05-29 02:00:39 - clean up routers
2025-05-29 02:00:39.945598 | orchestrator | 2025-05-29 02:00:39 - testbed
2025-05-29 02:00:41.223478 | orchestrator | ok: Runtime: 0:00:17.395742
2025-05-29 02:00:41.228011 |
2025-05-29 02:00:41.228174 | PLAY RECAP
2025-05-29 02:00:41.228326 | orchestrator | ok: 6 changed: 2 unreachable: 0 failed: 0 skipped: 7 rescued: 0 ignored: 0
2025-05-29 02:00:41.228391 |
2025-05-29 02:00:41.387102 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/post.yml@main]
2025-05-29 02:00:41.389541 | POST-RUN START: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-29 02:00:42.174058 |
2025-05-29 02:00:42.174239 | PLAY [Cleanup play]
2025-05-29 02:00:42.191513 |
2025-05-29 02:00:42.191684 | TASK [Set cloud fact (Zuul deployment)]
2025-05-29 02:00:42.249510 | orchestrator | ok
2025-05-29 02:00:42.259012 |
2025-05-29 02:00:42.259201 | TASK [Set cloud fact (local deployment)]
2025-05-29 02:00:42.294252 | orchestrator | skipping: Conditional result was False
2025-05-29 02:00:42.311422 |
2025-05-29 02:00:42.311609 | TASK [Clean the cloud environment]
2025-05-29 02:00:43.492910 | orchestrator | 2025-05-29 02:00:43 - clean up servers
2025-05-29 02:00:44.087720 | orchestrator | 2025-05-29 02:00:44 - clean up keypairs
2025-05-29 02:00:44.107686 | orchestrator | 2025-05-29 02:00:44 - wait for servers to be gone
2025-05-29 02:00:44.153873 | orchestrator | 2025-05-29 02:00:44 - clean up ports
2025-05-29 02:00:44.231649 | orchestrator | 2025-05-29 02:00:44 - clean up volumes
2025-05-29 02:00:44.298072 | orchestrator | 2025-05-29 02:00:44 - disconnect routers
2025-05-29 02:00:44.325762 | orchestrator | 2025-05-29 02:00:44 - clean up subnets
2025-05-29 02:00:44.344416 | orchestrator | 2025-05-29 02:00:44 - clean up networks
2025-05-29 02:00:44.491642 | orchestrator | 2025-05-29 02:00:44 - clean up security groups
2025-05-29 02:00:44.530687 | orchestrator | 2025-05-29 02:00:44 - clean up floating ips
2025-05-29 02:00:44.555039 | orchestrator | 2025-05-29 02:00:44 - clean up routers
2025-05-29 02:00:44.852182 | orchestrator | ok: Runtime: 0:00:01.460859
2025-05-29 02:00:44.856496 |
2025-05-29 02:00:44.856715 | PLAY RECAP
2025-05-29 02:00:44.856873 | orchestrator | ok: 2 changed: 1 unreachable: 0 failed: 0 skipped: 1 rescued: 0 ignored: 0
2025-05-29 02:00:44.856939 |
2025-05-29 02:00:45.015352 | POST-RUN END RESULT_NORMAL: [untrusted : github.com/osism/testbed/playbooks/cleanup.yml@main]
2025-05-29 02:00:45.016501 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-29 02:00:45.850135 |
2025-05-29 02:00:45.850363 | PLAY [Base post-fetch]
2025-05-29 02:00:45.867595 |
2025-05-29 02:00:45.867951 | TASK [fetch-output : Set log path for multiple nodes]
2025-05-29 02:00:45.924102 | orchestrator | skipping: Conditional result was False
2025-05-29 02:00:45.934357 |
2025-05-29 02:00:45.934598 | TASK [fetch-output : Set log path for single node]
2025-05-29 02:00:45.985883 | orchestrator | ok
2025-05-29 02:00:45.996584 |
2025-05-29 02:00:45.996796 | LOOP [fetch-output : Ensure local output dirs]
2025-05-29 02:00:46.529225 | orchestrator -> localhost | ok: "/var/lib/zuul/builds/62ec09aac32a49de9a819e3bd5eb4892/work/logs"
2025-05-29 02:00:46.840845 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/62ec09aac32a49de9a819e3bd5eb4892/work/artifacts"
2025-05-29 02:00:47.131226 | orchestrator -> localhost | changed: "/var/lib/zuul/builds/62ec09aac32a49de9a819e3bd5eb4892/work/docs"
2025-05-29 02:00:47.144220 |
2025-05-29 02:00:47.144405 | LOOP [fetch-output : Collect logs, artifacts and docs]
2025-05-29 02:00:48.148432 | orchestrator | changed: .d..t...... ./
2025-05-29 02:00:48.148823 | orchestrator | changed: All items complete
2025-05-29 02:00:48.148876 |
2025-05-29 02:00:48.929387 | orchestrator | changed: .d..t...... ./
2025-05-29 02:00:49.646465 | orchestrator | changed: .d..t...... ./
2025-05-29 02:00:49.669451 |
2025-05-29 02:00:49.669653 | LOOP [merge-output-to-logs : Move artifacts and docs to logs dir]
2025-05-29 02:00:49.711854 | orchestrator | skipping: Conditional result was False
2025-05-29 02:00:49.716010 | orchestrator | skipping: Conditional result was False
2025-05-29 02:00:49.742086 |
2025-05-29 02:00:49.742240 | PLAY RECAP
2025-05-29 02:00:49.742508 | orchestrator | ok: 3 changed: 2 unreachable: 0 failed: 0 skipped: 2 rescued: 0 ignored: 0
2025-05-29 02:00:49.742557 |
2025-05-29 02:00:49.960788 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post-fetch.yaml@main]
2025-05-29 02:00:49.964158 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-29 02:00:50.734152 |
2025-05-29 02:00:50.734372 | PLAY [Base post]
2025-05-29 02:00:50.750381 |
2025-05-29 02:00:50.750544 | TASK [remove-build-sshkey : Remove the build SSH key from all nodes]
2025-05-29 02:00:51.974694 | orchestrator | changed
2025-05-29 02:00:51.984863 |
2025-05-29 02:00:51.985005 | PLAY RECAP
2025-05-29 02:00:51.985078 | orchestrator | ok: 1 changed: 1 unreachable: 0 failed: 0 skipped: 0 rescued: 0 ignored: 0
2025-05-29 02:00:51.985196 |
2025-05-29 02:00:52.133934 | POST-RUN END RESULT_NORMAL: [trusted : github.com/osism/zuul-config/playbooks/base/post.yaml@main]
2025-05-29 02:00:52.136511 | POST-RUN START: [trusted : github.com/osism/zuul-config/playbooks/base/post-logs.yaml@main]
2025-05-29 02:00:52.934666 |
2025-05-29 02:00:52.934900 | PLAY [Base post-logs]
2025-05-29 02:00:52.947241 |
2025-05-29 02:00:52.947554 | TASK [generate-zuul-manifest : Generate Zuul manifest]
2025-05-29 02:00:53.416873 | localhost | changed
2025-05-29 02:00:53.430640 |
2025-05-29 02:00:53.430933 | TASK [generate-zuul-manifest : Return Zuul manifest URL to Zuul]
2025-05-29 02:00:53.457997 | localhost | ok
2025-05-29 02:00:53.461252 |
2025-05-29 02:00:53.461381 | TASK [Set zuul-log-path fact]
2025-05-29 02:00:53.476334 | localhost | ok
2025-05-29 02:00:53.484563 |
2025-05-29 02:00:53.484677 | TASK [set-zuul-log-path-fact : Set log path for a build]
2025-05-29 02:00:53.512130 | localhost | ok
2025-05-29 02:00:53.519480 |
2025-05-29 02:00:53.519763 | TASK [upload-logs : Create log directories]
2025-05-29 02:00:54.060188 | localhost | changed
2025-05-29 02:00:54.063413 |
2025-05-29 02:00:54.063536 | TASK [upload-logs : Ensure logs are readable before uploading]
2025-05-29 02:00:54.605963 | localhost -> localhost | ok: Runtime: 0:00:00.007843
2025-05-29 02:00:54.615917 |
2025-05-29 02:00:54.616119 | TASK [upload-logs : Upload logs to log server]
2025-05-29 02:00:55.218340 | localhost | Output suppressed because no_log was given
2025-05-29 02:00:55.222527 |
2025-05-29 02:00:55.222715 | LOOP [upload-logs : Compress console log and json output]
2025-05-29 02:00:55.289977 | localhost | skipping: Conditional result was False
2025-05-29 02:00:55.296400 | localhost | skipping: Conditional result was False
2025-05-29 02:00:55.304910 |
2025-05-29 02:00:55.305055 | LOOP [upload-logs : Upload compressed console log and json output]
2025-05-29 02:00:55.356176 | localhost | skipping: Conditional result was False
2025-05-29 02:00:55.356716 |
2025-05-29 02:00:55.360781 | localhost | skipping: Conditional result was False
2025-05-29 02:00:55.381216 |
2025-05-29 02:00:55.381675 | LOOP [upload-logs : Upload console log and json output]
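
The long run of "state STARTED" / "Wait 1 second(s)" messages above comes from a client that polls a task's state at a fixed interval, and the run only ended when Zuul's own job timeout fired (RESULT_TIMED_OUT). A minimal sketch of such a poll loop with an explicit deadline is shown below; function and parameter names are illustrative, not the actual OSISM client code:

```python
import time


def wait_for_task(get_state, task_id, interval=1.0, deadline=None):
    """Poll a task until it leaves a non-terminal state.

    get_state -- callable taking task_id, returning the current state string
    deadline  -- absolute time.monotonic() value after which we give up;
                 without one, a hung task blocks until the CI job times out
    """
    while True:
        state = get_state(task_id)
        if state not in ("PENDING", "STARTED"):
            # Terminal state (e.g. SUCCESS / FAILURE): stop polling.
            return state
        if deadline is not None and time.monotonic() >= deadline:
            raise TimeoutError(f"Task {task_id} still {state} at deadline")
        print(f"Task {task_id} is in state {state}")
        print(f"Wait {interval:g} second(s) until the next check")
        time.sleep(interval)
```

Note that although the loop sleeps for 1 second, the log shows roughly 3 seconds between checks; the extra time is presumably spent in the state query itself. An inner deadline shorter than the job timeout would turn a hang like the one above into an explicit error instead of a timed-out build.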